Thursday, April 16, 2015

Hybrid Heavy Lift Quad Copter

Electric aviation is upon us. There are now a few trainer aircraft from both the Boeing and Airbus camps, plus several more in the pipeline from China. On the NASA side, even next-generation passenger aircraft are in advanced research.

The pure electric trainers have a flight time of about an hour and recharge quickly, making the cost per flight very low.

The biggest problem is how to scale this into something that moves cargo. A good example of why this is needed is Vietnam, where the roads are not designed for heavy cargo and get destroyed by it, and the rail system is antiquated.

It would be far better to build green-ways designed as flight paths for cargo transport. Then we would have green roads, flown over by hybrid electric quad copters carrying cargo. You may have seen a quad copter drone: small electric motors, flying around, some even with complete autopilots.

Well, I believe I've now found all the key components for a foundation for a large-scale heavy-lift quad copter. Quads are controlled by varying the thrust of the four engines. So most of the design is straightforward; for our cargo version we will need a little more, but it's a very similar concept.
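To make the control idea concrete, here is a minimal thrust-mixing sketch in Python. This is illustrative only (a plus configuration, with sign conventions I chose myself), not the HVQ flight controller:

```python
# Plus-configuration quadcopter mixer: maps throttle/roll/pitch/yaw
# commands onto the four motor thrusts. Illustrative only -- a real
# flight controller adds feedback loops, limits, and calibration.

def mix(throttle, roll, pitch, yaw):
    """Return thrust commands for (front, rear, left, right) motors."""
    front = throttle + pitch - yaw
    rear  = throttle - pitch - yaw
    left  = throttle + roll  + yaw
    right = throttle - roll  + yaw
    return front, rear, left, right

# Pure throttle: all four motors equal -> vertical lift.
print(mix(0.5, 0, 0, 0))        # (0.5, 0.5, 0.5, 0.5)

# Pitch forward: the front motor slows and the rear speeds up.
print(mix(0.5, 0, -0.1, 0))
```

A cargo-scale version works the same way, just with thrust spread across each pod's fan array instead of one propeller per corner.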

Heavy Lift High Level Design:

An outside frame, for connection of four engine pods. An engine pod consists of a 200-kilowatt brushless electric motor and its controllers. This gives us a total of 800 kW of lift capability. The brushless motors have controllers that connect to the flight control system.

Ducted Fans
The engine clusters are arrays of ducted fans. A single ducted fan consists of an engine casing duct, plus our brushless motor and a propeller. From advanced research on the topic, this helps increase the thrust and gives us better lift.
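As a rough sanity check on lift, ideal actuator-disk (momentum) theory relates hover power to thrust and disk area, and ducting effectively improves the use of a given disk area. The sketch below is a back-of-envelope estimate; the 3 m² disk area per pod is purely my assumption:

```python
# Ideal-hover estimate from actuator-disk theory: P = T^1.5 / sqrt(2*rho*A),
# inverted here to get thrust from available power. Real fans achieve
# only a fraction of this ideal figure.
from math import sqrt

def ideal_hover_thrust(power_w, disk_area_m2, rho=1.225):
    """Thrust (N) an ideal rotor of the given area produces in hover."""
    return (power_w * sqrt(2 * rho * disk_area_m2)) ** (2.0 / 3.0)

t = ideal_hover_thrust(200e3, 3.0)   # one 200 kW pod, assumed 3 m^2 of disk
print(round(t))                       # ~6.6 kN, roughly 680 kgf per pod
```

Across four pods that is on the order of a couple of tonnes of ideal lift, which is why fan efficiency and disk area matter so much for the cargo case.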


The power plant that generates electricity to recharge the batteries uses an array of micro turbines.
This is a relatively new concept, with quite a bit of technology evolution in the last few years.
For my design, I tend to believe Bladon Jets has the right technology for this. The HVQ will use an array of these microjets. They will provide power to charge the battery array and operate the engine pods. Redundancy is built in to allow for the failure of a microjet.

In addition, the flight controller will allow for precision GPS guidance and remote flight control. The frame will have an inner frame that keeps the container level.

Factories simply need a set of landing pads and local assisted-GPS transmitters.
The green-path, which is forested land, would also have assisted GPS.

A national flight control center would allow for coordination of flights, and delivery of containers.
In addition, landing pads and refueling stations would be built along the green flyway.

Even if an HVQ crashed, it would come down in a forested area, and safety devices would be built in as well.

This would allow the transport of containers from one end of Vietnam to the other, at high speed, with no build-out of roads. The engines are very efficient, and redundancies could be built in.

Wednesday, April 01, 2015

Containers - I really need those #$%$$## Containers

The future is paved with containers. I happened to run into an article that was really excellent. It was written in 2013; I actually went back and forth between the content and the date, because the writer nailed it
perfectly with a good explanation. Modern containers and Docker seem to have sprung onto the stage suddenly,
but in reality containers have been around a long time. The article I mentioned is: Containers are the future of cloud - 2013

While it might be fun to look at history, let's talk about why containers are important.

1. Incredibly fast startup times - from minutes for a full VM boot to 1-5 seconds or less.
2. Very low memory overhead - from a 256-megabyte VM to as little as 5 megabytes of memory for a container.
3. Dependency management made easy - each container can hold a totally different runtime and totally different packages and software, without any worry of conflict.
4. Ease of install - with a standard registry, installing a container can be as simple as giving its name.
5. Reduced cyber-threat attack surface - since a container can be tuned to use only what is needed, the number of potential loopholes a hacker can attack is far smaller.
6. SELinux/cgroups - these isolate containers from each other and from the world. SELinux is recognised as a great way of providing security for the world's most sensitive applications.
7. Built-in SME (Subject Matter Expert) - using a Dockerfile to build a container can encapsulate the best practices of an expert and make them reusable.
8. Combining containers across multiple hosts using the automation of Kubernetes lets you build complex highly available clusters with one command. By stacking these, you can build a complex application quickly.
9. Containers are the natural technology solution for microservices, and combined with a DevOps-driven People-Process-Technology frame they give companies the greatest agility.
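Point 2 alone implies a big consolidation win. A quick back-of-envelope calculation, using the memory figures above and an assumed 16 GB host:

```python
# Back-of-envelope consolidation math using the figures above
# (256 MB per VM vs. ~5 MB per container; numbers are illustrative).
host_ram_mb = 16 * 1024          # a 16 GB host, as an example

vm_overhead_mb = 256
container_overhead_mb = 5

vms = host_ram_mb // vm_overhead_mb
containers = host_ram_mb // container_overhead_mb

print(vms)         # 64 VMs' worth of overhead
print(containers)  # 3276 containers' worth of overhead
```

That is roughly a 50x difference in baseline overhead before the application itself uses a single byte.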

The technology stack of the future looks something like:

This is the stack for the next version of Openshift. If you want to read more: OpenShift 3.0 Architecture

Tuesday, March 31, 2015

Try BPM in 30 seconds or less.

I've always liked BPM tools, which are part of Red Hat's middleware suite. They are really a great way to solve complex business problems, with a rules engine that can be changed as business or compliance requirements change.

As BPM is a large set of tools, it's usually a bit of work to install it and get it going. Thanks to the new OpenShift Hub, you can now install it and start using it online, in a public cloud, in a few minutes.

I did it for the first time this morning, and sure enough it just worked; I could start using the web interface moments after going through the OpenShift Hub.

The Hub is available online if you would like to try it.

The result, after logging in as admin, is:

I find this exciting when you look at it from a DevOps and IT-agility point of view. When the future is containerised, common components, even ones as complex as BPM Suite, can be instantiated in seconds and connected into complex apps far faster than with the traditional process.

Think about picking from a menu of services to build applications, and adding the glue as needed to make this all work incredibly fast.

Saturday, August 31, 2013

Why you should never buy Acer Gear - My Service Center Experience - Aspire

My wife's machine had slowed to a crawl, and it was time to update the software.
Windows 8 was in the cards, so I happily tried to install it on the Aspire Z5771.

This is a nice-looking machine, hobbled by all the installed "extras" that make it as slow as
sludge from day one. The hardware spec, as far as the processor goes, is OK, yet it was under-performing from day one.

A lot of this is due to a few factors, I suspect:

1. OEM Bloat - All the freeware, etc. that people generally do not use, but that takes space, memory,
and boot time, making a slug out of your machine. (At least Dell gives the option of not installing it on their
corporate machines.) Short of a clean install, there's no way to get rid of it. So you have to get a "new" copy of Windows, as the Acer install will put it all back again.

2. Sub-Par BIOS - The BIOS on the machine was literally version 1, and had limited functionality.
It also didn't seem to have user BIOS recovery. (More on this in a bit.)

3. No-Name Motherboard - Having managed to get the service manual, I found the motherboard is a
no-name, "spec" motherboard. Meaning Acer wanted a general set of features and didn't really
care "how" they were implemented. The service manual reads like an RFQ (Request For Quote),
showing this attitude. Almost any motherboard from Sim Lim (the local IT mall) would have been better. Branded plastic, unbranded internals. This might have been OK if #1 and #2 were up to par.

So the hope was that Windows 8 would help a bit with #1. The problem is that #2 prevented a Windows 8
install. I wiped the machine and started a Windows 8 install, only to find it failed; on reading around, I found that the BIOS had to be updated first.

So after digging through the Acer site and finding the BIOS update, I proceeded to program it. My blessings were not complete that day, as the BIOS update hung and failed. And of course no BIOS = no boot.

This does not faze me too much, as I know there is usually a hidden BIOS recovery, or you can do it with JTAG or even an I2C programmer, having done this before.

So I took it to the Acer service center. Big mistake.

First they were arguing with a customer, refusing to honor his warranty. The backlight had evidently overheated, and the display cracked. Customer service denied anything like
that was possible, and insisted there was no warranty.

That was not a good omen.

When I got my turn, I clearly asked them to reprogram the BIOS. After I explained it in detail, they refused to listen, told me they didn't think such a thing was possible, and said they would charge me $80 to "look" at it.

So I'm waiting till I don't know when, for I don't know what.

The staff appear eager to treat customers badly, eager to avoid warranty claims, and eager
not to have repeat customers of the brand.

So you're better off buying an Apple, a Dell, or any brand but Acer. The Acer brand does not seem to stand for customer service. I understand some people will try to "cheat" the warranty, but not everyone does, and it's insulting to be treated with that attitude.

It's painful when a machine fails, and it's compounded by a bad attitude.

So, note to self: never buy Acer, ever again.

Tuesday, April 23, 2013

Home Lab Setup

I've been looking for the perfect "cloud" lab to experiment with Red Hat RDO OpenStack,
and to do that, I need to set up some hardware. Big memory is key, and figuring out
the right processor to maintain low cost is not easy.

It looks like the Intel Core i5-3470 processor is the ideal solution, and using a Q77-based P8Q77-M/CSM with 32 GB of RAM is ideal.

A Synology DS411Slim seems ideal as a small NAS to drive the testbed.
It uses 2.5-inch drives, and while it only has one network port, the low cost would allow for
two units, to play with storage load balancing.

Wednesday, January 23, 2013

Design of Next Generation Power Plant

One of the biggest issues in the world is power. Traditional coal-fired plants will always cause problems, and traditional nuclear plants have huge downsides in waste and danger. What is the solution?

My belief is that the next generation of power will be based on nickel-hydrogen fusion: you push hydrogen into nickel, and you get copper out, plus a whole lot of heat. I've been following this technology for 18 months.

An e-cat, which is a small cylinder, can generate upwards of 1600 degrees C, and the only waste is copper. It's good for six months. A 1-megawatt plant is the size of half a 50-gallon drum.

My goal is to combine e-cat technology with a few off-the-shelf and emerging technologies to design
power plants that can scale from small sizes up to 1 gigawatt using interchangeable components.

In preparation, some of the key technologies for this include:

The e-cat (as mentioned)

Starlight Insulator

This gives you the ability to build and separate components that have a hot side at 1600 degrees on one side and room temperature on the other.

The next component is a network-connected controller, consisting of a Microchip PIC32 controller.

The concept is to "productize" the e-cat into a reusable "cylinder" with a hot side for heat output,
an active Starlight shutter for safety and maintenance, a Starlight separator at the end, and a control assembly consisting of a PCB (printed circuit board) that contains the PIC32 controller and connections for a standardized mating collar. The module becomes a hot-pluggable component with all localized control onboard, plus a network connection and power and control connections.

This component can then be used in multiples to solve different problems. A 4-pack, for example, could power a vehicle or a home; an 8-pack could power a long-haul truck.

The next unit to look at is the mounting collars that allow plug-in of the modules. These will be designed for a circular stacking configuration. The concept is to have a cold core, with all the "cold" sides of the modules facing in at a downward angle, allowing multiple levels of cylinders to build a power tower.

A power tower, for example, could house 12 cylinders per level, stacked 4 or 5 levels high, or for large-scale fixed plants up to 40 feet tall, giving 1200 cylinders per tower, with a reaction vehicle having multiple power towers.

The design would allow for robotic replacement while operating.

This gives us the ability to scale to 1 gigawatt.
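Putting the figures above together (1 MW per e-cat module, 12 cylinders per level, 1200 per full-height tower), the gigawatt claim checks out arithmetically. The level count below is inferred from 1200 / 12, not stated in the design:

```python
# Arithmetic check of the scaling claim: 1200 one-megawatt modules
# per full-height tower. The level count is inferred (1200 / 12).
mw_per_module = 1
modules_per_level = 12
levels = 1200 // modules_per_level   # 100 levels for a full tower

total_modules = modules_per_level * levels
total_mw = total_modules * mw_per_module
print(total_modules, total_mw)       # 1200 modules -> 1200 MW (1.2 GW)
```

The extra 200 MW over the 1 GW target is what leaves room for redundancy and for modules rotated out for robotic replacement.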

Wednesday, December 26, 2012

Maria Is Her Name - Better Performance and OpenSource

I was experimenting with implementing an internet search engine and found MySQL was proving to be the bottleneck. While surfing, I found that MariaDB had "spun" out of MySQL. MariaDB is a drop-in replacement for MySQL, and is totally open source. My test bed is a Mac mini, and sure, I could throw hardware at the problem, but it's far more interesting to look at optimizing. So using brew, an open-source package manager for OS X, I installed MariaDB. I had an existing 20 GB database, and it picked it up and used it just fine. The surprising thing is I saw a 30% performance improvement without any other change.

MariaDB has a new thread-pool manager and lots of additional functionality to explore. I'm running the 5 series, but the big push is now for MariaDB 10, which should appear soon, as it's already in the alpha/beta test phase.

Maria Overview

Sunday, August 28, 2011

RealTime and Rails

Sometimes you need to process things in real time: stock-trading apps, for example,
or real-time video. EventMachine, in Ruby, is great for this. On top of it you want a GUI or a
web page, so Rails naturally fits in. The problem is that putting real-time and Rails in
the same sentence is not quite there yet. In an application I've built, I had exactly that:
I receive multiple HD CCTV feeds over UDP and handle the whole RTSP protocol using EventMachine.
The problem was that every 5 minutes I wanted to save the current file information into
the database. I tried a few methods, but the most reliable turned out to be an em-http request,
which allows you to do async HTTP in EventMachine. I, of course, targeted the Rails stack that
was already running the application.

So this way, my real-time work can continue (async), and the Rails stack gets a tickle
every 5 minutes to stay alert. Result: no more dropped packets.

      EM.add_periodic_timer(300) {
        http = EventMachine::HttpRequest.new('http://gshift-app/').get :query => {'keyname' => 'value'}

        http.errback { p 'Uh oh'; EM.stop }
        http.callback {
          p http.response_header.status
          p http.response_header
          p http.response
        }
      }


Thursday, March 24, 2011

Saving My Bacon when things go Wrong - Veeam Backup In Action

Sometimes you click something innocuous, and the world ends.
The button was not red, and it didn't have a safety cover.
Yet it did something really bad and totally unexpected.

That happened to me this morning. I shut down a file server,
and the datastore was gone. I understand why, as I was moving data
off the datastore, but I didn't expect it to go at that point.

What to do Now !?!?!

We had updated to Veeam 5.0 VMware backup a few months ago.
I decided to give it a try, as I had a backup from just hours ago
and no data had been updated.

With traditional tape, a restore is a multi-day affair with lots of downtime
and other unpleasant activity.

With Veeam, I still expected 5-8 hours of downtime while Veeam did the restore.

Instead I decided to use Instant Recovery mode. From start to having the
server running again took 2 minutes, counting boot time; the restore itself only took a minute and a half. At that point the server was running over a
virtualized NFS server automatically attached to the blade of my choice,
automatically added to my vCenter, and automatically started.

For the non-believers, I've included a screenshot just to prove the point.
The server in question has been running most of the day without issue,
and it's 60% migrated back to production storage using Storage vMotion.

It is really unbelievable how well and how easily it worked.

Wednesday, March 16, 2011

Parsing dates and times in English free form

I've been looking for this for a while. I want to be able to take
something like "Every tuesday, starting on the 5th of april" and
convert it to computer-readable form. Now, in Ruby, there is a
library to do it.

It's a Ruby gem/add-on called Nickel, by Natural Inputs.

Nickel is an API that extracts date, time, and message information from naturally worded text.
Why you should use it.

Simplify any form with date and time inputs
Increase your website's usability
Handles recurring date and time information

In my case, I wanted to create calendar entries via an email message.

This way, the subject line alone can give me a lot of information.

Sunday, February 27, 2011

An Example of _BeforeDataInitialize for Microsoft LightSwitch Collections

When you have a LightSwitch edit screen, it's not clear from the docs what
you need to do to initialize data. You have to create an instance of the item
and put it into the collection's selected-item field,
like so.

partial void PurchaseRequests_BeforeDataInitialize()
{
    // Write your code here.
    PurchaseRequest theitem = new Portal.PurchaseRequest();
    theitem.PrState = "Unsubmitted";
    theitem.Requestor = Application.Current.User.Name;
    theitem.Subtotal = 0;
    theitem.Total = 0;
    theitem.CurrentcyType = "SGD";
    theitem.DateCreated = DateTime.Now;
    theitem.DateUpdated = DateTime.Now;
    PurchaseRequestCollection.SelectedItem = theitem;
}

Friday, February 11, 2011

End of Nokia

Sometimes I really wonder if people understand the market. While I generally agree that Microsoft does a great job in most areas, mobile phones are not one of them. Windows Phone 7 has died to a large degree; even close partners are not happy with the market reality of Windows Phone 7. The market share of Windows Mobile/Windows CE continues to fall, with the iPhone and Android winning fast.

Actually, Palm's offering (now HP's) is far stronger than Windows Phone 7, and it's the "third" choice as far as the consumer is concerned. Android phones are cheap and getting cheaper, with a wealth of applications and reasonably quick software updates.

Nokia's hardware and design are not bad; the problem is one of a company being in the business too long: too many models, too many different OSs,
to the point that change management is just broken. (Try doing application development in the Nokia space; it's a nightmare of screen sizes and versions.)

So the choices Nokia had were:
1. Their new Linux-based in-house effort - so long in coming that it's dead.
2. Use Android/Google and have a close partnership with the other giant.
3. Use Microsoft - the underdog.

I can understand that Nokia does not want to compete with HTC, but the reality is they are.
HTC produces both Windows and Android models, and the Android ones are incredibly popular. So Nokia thinks they can make Windows Mobile better? After how many versions? Please, don't waste your money. Even on my industrial environment devices (barcode scanners) I'd prefer to have Android; it would be easier to develop for.

I really feel Nokia is missing the boat. Joining the Android universe would be a smart move; it would put them in the game. Even offering ports to the existing hardware platforms (phones) would allow them quick entry into the market, without big retooling on the factory side. Newer versions of Android really don't need big skinning jobs, so it would be very quick to see a result.

I love Nokia hardware; until iPhones came, every phone I bought was a Nokia. Solid
hardware, cost effective, great ability to customize the products. The iPhone changed
the game. Android changed it again. With Android you can compete with the iPhone; with Windows Phone 7, don't bother to try.

Attention Nokia board: you need to wake up. I agree that the "house" is on fire and you have to take action, but don't use gasoline to put out the fire. Windows Mobile/Windows CE has not delivered, and I really don't see it delivering. It had years, and never even touched much of your Symbian market share. Look at what Android has done.

I hope I'm wrong and you're going to do both Android and Windows, and Windows just came first. If so, then you have a chance. But don't make Windows Mobile your final eulogy.

Tuesday, December 07, 2010

New widgets for Rails 3

I've used activescaffold in countless Rails projects. Rails 3 resets the clock a bit.
With each major revision of the framework, I always re-evaluate my toolkit to see what is a better or more up-to-date fit.

While I'm happy with the work I've done combining rails and silverlight, and I've also tried some Silverlight WCF RIA projects, a pure rails project has some appeal.

It looks like there is another choice. Instead of using activescaffold, look at NetZke.
It leverages Ext JS and gives a very nice GUI, with a better look and feel than activescaffold plus widgets. Overall, I'd say a base site for a business app in Rails 3 plus NetZke will look much better than the AS-plus-widgets menu I've been using for a while for backend ops.

Of course, if you want smooth, go Silverlight for the GUI with a REST interface.

Tuesday, October 19, 2010

Returning to Roots - Comparison of Ruby and C-Sharp

I ran into this while doing my morning reading.
It is a great video that compares C# and Ruby.
As I'm currently doing both side by side, this is a really great review
of the pluses and minuses of each.

The presenter is great, and there's lots of code.

Wednesday, October 13, 2010

Limiting Data by User In Microsoft LightSwitch

I wanted to be able to limit my queries by user. To do this you need to add additional query code, which can only be done if you create the query under the data source. So, for example, say I have a table called
appointments, and the table also has a relationship to users. Right-click on Appointments and choose Add A Query. If you try to do this under the screen, you will not be able to add the code.

Once you have the query and have renamed it, you can click the Edit Additional Query Code button in Properties.

The gorgeous part of this is that you can have a complex query; in my case there are three groups OR'd together, and then on top of that we limit by user.

And it only takes a couple of lines.

 partial void MyTodaysAppointments_PreprocessQuery(ref IQueryable<Appointment> query)
 {
       query = from Appointments in query
           where Appointments.User.Name == Application.Current.User.Name
           select Appointments;
 }

Wednesday, October 06, 2010

Minimal Telnet Parsing in C#

Ever wonder what minimal parsing is needed to implement a telnet server?
While the socket/listen part is easy stuff, and life is easy for low session counts
using a thread per session, the issue is: what do I need to do
to get to the point where I can talk to my device? In this case, a scan gun
with a telnet client.

Telnet is actually a protocol. The PC Micro link is a good write-up,
which Wikipedia was so nice to provide a link to.

I googled for a while and could not find anything other than expensive .NET libraries or basic protocol documentation.

Here is the code, so you don't have to search any longer.

private void HandleClientComm(object client)
{
    TcpClient tcpClient = (TcpClient)client;
    NetworkStream clientStream = tcpClient.GetStream();

    // Telnet protocol bytes (RFC 854)
    const byte telnet_iac  = 255; // Interpret As Command
    const byte telnet_do   = 253;
    const byte telnet_will = 251;
    const byte telnet_sb   = 250; // Start subnegotiation
    const byte telnet_se   = 240; // End subnegotiation

    byte[] message = new byte[4096];
    byte[] cmd = new byte[4096];
    int cmdcnt;
    int bpos;
    int bytesRead;
    const int tmode_normal = 0;
    const int tmode_iac = 1;
    const int tmode_option = 2;
    const int tmode_do = 3;
    const int tmode_will = 4;
    int mode;
    byte thebyte;
    BarCodeClient bci;
    cmdcnt = 0;
    mode = tmode_normal;

    bci = new BarCodeClient(client);
    while (true)
    {
        bytesRead = 0;
        try
        {
            // blocks until a client sends a message
            bytesRead = clientStream.Read(message, 0, 4096);
        }
        catch
        {
            // a socket error has occured
            break;
        }

        if (bytesRead == 0)
            break; // the client has disconnected from the server

        bpos = 0;
        while ((cmdcnt < 4096) && (bpos < bytesRead))
        {
            thebyte = message[bpos++];
            switch (mode)
            {
                case tmode_normal:
                    switch (thebyte)
                    {
                        case telnet_iac:
                            mode = tmode_iac;
                            break;
                        case 0x0a: // LineFeed
                        case 0x0d: // CarriageReturn: end of a command line
                            string thecmd = Encoding.ASCII.GetString(cmd, 0, cmdcnt);
                            if (thecmd.Length > 0)
                                bci.Cmd(thecmd);
                            cmdcnt = 0;
                            break;
                        default:
                            cmd[cmdcnt++] = thebyte;
                            break;
                    }
                    break;
                case tmode_iac:
                    switch (thebyte)
                    {
                        case telnet_do:
                            mode = tmode_do;
                            break;
                        case telnet_se: // End Subnegotiation
                            mode = tmode_normal;
                            bci.Cmd(""); // Let the lower level know to force a screen refresh
                            break;
                        case telnet_sb:
                            mode = tmode_option;
                            break;
                        case telnet_will:
                            mode = tmode_will;
                            break;
                        default:
                            mode = tmode_normal;
                            break;
                    }
                    break;
                case tmode_do: // swallow the option byte after IAC DO
                    mode = tmode_normal;
                    break;
                case tmode_will: // swallow the option byte after IAC WILL
                    mode = tmode_normal;
                    break;
                case tmode_option:
                    switch (thebyte)
                    {
                        case telnet_iac: // subnegotiation data runs until IAC SE
                            mode = tmode_iac;
                            break;
                    }
                    break;
            }
        }
    }
    tcpClient.Close();
}

Tuesday, October 05, 2010

Creating a Debian iSCSI shared RAID for backup of VMware ESX systems

I usually keep a backup image of the production systems on one of my test boxes,
so I have an easy way of recovering in the event of disaster.
This was previously an Openfiler box, but it managed to lose its disks.
So I decided to set up an equivalent system using Debian. Using
the minimal network install, I set up a base Debian system, then added software RAID, and finally iSCSI target support. Note this is all running as a VM on top of ESXi, with raw disks mapped to local SATA drives.

First, map your raw disk using vmkfstools:

vmkfstools -r /vmfs/devices/disks/vml.0100000000202020202020202020202020395653334e595335535433313530 disk1.vmdk

Next you need to go into the VM settings and add the raw VMDK to the VM's config.

Now in Debian, you will see the raw devices at boot.

Next, install mdadm:

apt-get install mdadm

and you need to install the iSCSI target kernel modules:

apt-get install iscsitarget-modules-2.6.26-2-686

Now set up the RAID:
mdadm --create /dev/md0 --level=5 --raid-devices=5 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

To check the status:
mdadm --detail /dev/md0
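One thing worth remembering about the five-disk RAID 5 above: one disk's worth of space goes to parity, so usable capacity is (n - 1) times the drive size. A quick check, with an assumed 1 TB drive size:

```python
# Quick capacity check for the array above: RAID 5 stores one disk's
# worth of parity, so usable space is (n - 1) x drive size.
disks = 5                        # --raid-devices=5 in the mdadm command
disk_gb = 1000                   # illustrative drive size, not from the post

usable_gb = (disks - 1) * disk_gb
print(usable_gb)                 # 4000 GB usable, 1000 GB spent on parity
```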

Now add the disk into /etc/ietd.conf.

Change the following:
#Assuming you want some security, a chap password on incoming user is useful
IncomingUser sinadmin xxyyzz

        # Fixup the security  
         IncomingUser sinadmin xxyyzz
        # Block devices, regular files, LVM, and RAID can be offered
        # to the initiators as a block device.
        Lun 0 Path=/dev/md0,Type=fileio

And finally:
invoke-rc.d iscsitarget restart

That gets everything using our new config.

Surprisingly, I found this no worse than using Openfiler, and I know what is happening
behind the covers.

Thursday, September 30, 2010

A Debuggable Windows Service in Csharp

This is a great example of a Windows service. I use it in an app that reads email from POP and updates a database/webapp written in Silverlight for the front end.

One note: this uses a timer approach to initiate the check. I found that unless I put a lock around the worker class, I would see the timer fire again, and I would hit a breakpoint in another "thread" while debugging the first. The lock fixed the issue. But take note if you're using this or something similar.
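The overlap is easy to reproduce outside .NET. Here is a small Python sketch of the same idea: a non-blocking lock makes a second timer tick skip while the first is still working (the C# `lock` statement blocks instead, queuing the tick). Names and timings are illustrative:

```python
# Illustration of the overlapping-timer problem described above:
# guard the worker with a lock so a second tick is skipped while
# a run is still in progress.
import threading
import time

lock = threading.Lock()
runs, skips = [], []

def do_work(tag):
    # Non-blocking acquire: skip this tick if the previous run is live.
    if not lock.acquire(blocking=False):
        skips.append(tag)
        return
    try:
        runs.append(tag)
        time.sleep(0.3)          # simulate a slow mail-check
    finally:
        lock.release()

# Two "timer ticks" arrive 0.1 s apart -- the second is skipped.
t1 = threading.Thread(target=do_work, args=(1,))
t2 = threading.Thread(target=do_work, args=(2,))
t1.start(); time.sleep(0.1); t2.start()
t1.join(); t2.join()
print(runs, skips)               # [1] [2]
```

Without the guard, both ticks would run the worker concurrently, which is exactly the double-fire seen in the debugger.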

 using System;
 using System.Collections;
 using System.ComponentModel;
 using System.Configuration.Install;
 using System.Reflection;
 using System.ServiceProcess;
 using System.Threading;
 using System.Management;
 using Microsoft.Practices.EnterpriseLibrary.Logging;

 namespace WhereAmIEmail
 {
   public class WhereAmIEmail : ServiceBase
   {
     private Worker myworker;
     private Timer serviceTimer;
     private Container components = null;

     public WhereAmIEmail()
     {
       InitializeComponent();
     }

     // The main entry point for the process
     private static void Main(string[] args)
     {
       string opt = null;
       // check for arguments
       if (args.Length > 0)
       {
         opt = args[0];
         if (opt != null && opt.ToLower() == "/install")
         {
           TransactedInstaller ti = new TransactedInstaller();
           ProjectInstaller pi = new ProjectInstaller();
           ti.Installers.Add(pi);
           String path = String.Format("/assemblypath={0}",
             Assembly.GetExecutingAssembly().Location);
           String[] cmdline = {path};
           InstallContext ctx = new InstallContext("", cmdline);
           ti.Context = ctx;
           ti.Install(new Hashtable());
         }
         else if (opt != null && opt.ToLower() == "/uninstall")
         {
           TransactedInstaller ti = new TransactedInstaller();
           ProjectInstaller mi = new ProjectInstaller();
           ti.Installers.Add(mi);
           String path = String.Format("/assemblypath={0}",
             Assembly.GetExecutingAssembly().Location);
           String[] cmdline = {path};
           InstallContext ctx = new InstallContext("", cmdline);
           ti.Context = ctx;
           ti.Uninstall(null);
         }
       }
       if (opt == null) // e.g., nothing on the command line
       {
 #if ( ! DEBUG )
         ServiceBase[] ServicesToRun;
         ServicesToRun = new ServiceBase[] {new WhereAmIEmail()};
         ServiceBase.Run(ServicesToRun);
 #else
         // debug code: allows the process to run as a non-service
         // will kick off the service start point, but never kill it
         // shut down the debugger to exit
         WhereAmIEmail service = new WhereAmIEmail();
         service.OnStart(null);
         Thread.Sleep(Timeout.Infinite);
 #endif
       }
     }

     /// <summary>
     /// Required method for Designer support - do not modify
     /// the contents of this method with the code editor.
     /// </summary>
     private void InitializeComponent()
     {
       components = new Container();
       this.ServiceName = "WhereAmIEmail";
     }

     /// <summary>
     /// Clean up any resources being used.
     /// </summary>
     protected override void Dispose(bool disposing)
     {
       if (disposing)
       {
         if (components != null)
         {
           components.Dispose();
         }
       }
       base.Dispose(disposing);
     }

     /// <summary>
     /// Set things in motion so your service can do its work.
     /// </summary>
     protected override void OnStart(string[] args)
     {
       myworker = new Worker();
       TimerCallback timerDelegate = new TimerCallback(myworker.DoWork);
       serviceTimer = new Timer(timerDelegate, null, 10000, 10000);
     }

     /// <summary>
     /// Stop this service.
     /// </summary>
     protected override void OnStop()
     {
       // test.Stop() ;
       serviceTimer.Dispose(); // stop the timer so no further checks fire
     }
   }
 }

Hacking LightSwitch User Database Entry using Csharp .net and LinqToSQL

I want to be able to dynamically create users in my Microsoft LightSwitch applications. One of the issues is that they use a separate database.

Since my app accepts email messages and creates records for the users, it's rather important they exist in the system.

So the background Windows service uses LinqToSql to update the tables.

Each user in LightSwitch is in the aspnet_Users table, and there is a record for each application sharing the same database. So if you're creating a user, you also need to know which application GUID you need to connect the user to.

  public int CreateUser(string UserName)
  {
    User myUser = new User();
    myUser.Name = UserName;
    db.Users.InsertOnSubmit(myUser);
    db.SubmitChanges(); // the database assigns the new Id
    return (myUser.Id);
  }

  public Guid CreateAspUser(string UserName)
  {
    aspnet_User myUser = new aspnet_User();
    myUser.ApplicationId = MyAppId;
    myUser.UserId = Guid.NewGuid();
    myUser.UserName = UserName;
    myUser.LoweredUserName = UserName.ToLower();
    myUser.IsAnonymous = false;
    myUser.LastActivityDate = System.DateTime.Now;
    db.aspnet_Users.InsertOnSubmit(myUser);
    db.SubmitChanges();
    return (myUser.UserId);
  }

  public Guid GetMyAppId(string appName)
  {
    var query = from aApp in db.aspnet_Applications where aApp.ApplicationName == appName select aApp;
    return (query.First().ApplicationId);
  }

  public Guid GetUserID(String UserName)
  {
    var query = from aUser in db.aspnet_Users where aUser.UserName == UserName where aUser.ApplicationId == MyAppId select aUser;
    if (query.Count<aspnet_User>() == 0)
      return (Guid.Empty);
    return (query.First().UserId);
  }

  public int GetPortalUserID(String UserName)
  {
    var query = from aUser in db.Users where aUser.Name == UserName select aUser;
    if (query.Count<User>() == 0)
      return (0);
    return (query.First().Id);
  }

So then it's just a matter of calling the code:

  Guid theUserID = GetUserID(UserName);
  if (theUserID == Guid.Empty)
  { // No such user
    theUserID = CreateAspUser(UserName);
  }
  int thePortalID = GetPortalUserID(UserName);
  if (thePortalID == 0)
  { // Need to create local user
    thePortalID = CreateUser(UserName);
  }
  CreateAppointment(theUserID, thePortalID, Email.Subject, theMessage);

ActiveDirectory and Csharp - Finding the domain user via email

Sometimes there are so many examples, and none of them seem to work.

I wanted to look up and verify that a user exists via email, in a process that reads email via POP and then creates records in a Silverlight/LightSwitch application.

So how do you do it?

I've included the code below. Note that a normal user is fine for the username and password, and you must supply the domain controller. I used the IP of mine.

public DirectoryEntry find_by_email(string email)
{
  DirectoryEntry dirEntry = new DirectoryEntry("LDAP://", "ausername", "apassword", AuthenticationTypes.Secure);
  DirectorySearcher Dsearch = new DirectorySearcher(dirEntry);
  Dsearch.Filter = "(&(objectCategory=person)(sAMAccountName=*)(mail=" + email + "))";
  SearchResult sResult = Dsearch.FindOne();
  if (sResult == null)
  {
    return null;
  }
  EventLog.WriteEntry("DoneEmailLookup");
  DirectoryEntry user = sResult.GetDirectoryEntry();
  return (user);
}

Thursday, August 26, 2010

Doing an Offline Client Using Silverlight 4 and a Rails Backend

I've previously done a multimedia Silverlight app for video.
Now I'm doing a data-driven app, more in line with Rails.
The issue with any web app is: no internet = no work.
But how do you have a nice web client and a local database?
Especially one that can operate with a Rails backend?

I found a great solution for doing this, in the form of a database product
called Siaqodb. I managed to create a web app with a datagrid in Silverlight
that can run on the web or as a local app, with local tables, and did it in a few hours.

For your local tables you can create a model in silverlight such as:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Sqo;

namespace SendIt
{
    public class SendItAttachment : ISqoDataObject
    {
        public string Name { get; set; }
        public string Path { get; set; }
        public DateTime StartTime { get; set; }
        public DateTime EndTime { get; set; }
        public int TransferTime { get; set; }
        public Boolean Done { get; set; }
        public Boolean Failed { get; set; }
        public string Hash { get; set; }
        public int FileSize { get; set; }
        public int OID { get; set; }

        public object GetValue(System.Reflection.FieldInfo field)
        {
            return field.GetValue(this);
        }

        public void SetValue(System.Reflection.FieldInfo field, object value)
        {
            field.SetValue(this, value);
        }
    }
}

It's important to make sure your grid is set to auto-generate the columns:
 <sdk:DataGrid AutoGenerateColumns="True" Height="273" HorizontalAlignment="Left" Margin="13,321,0,0" Name="SendingGrid" VerticalAlignment="Top" Width="780" />

Add a reference to SiaqodbSL so that you can use the database.

The next thing is to instance the database and load your data:

        public MainPage()
        {
            InitializeComponent();
            this.loginContainer.Child = new LoginStatus();
            this.Loaded += new RoutedEventHandler(MainPage_Loaded);
        }

        Siaqodb siaqodb;

        void MainPage_Loaded(object sender, RoutedEventArgs e)
        {
            if (Application.Current.IsRunningOutOfBrowser && Application.Current.HasElevatedPermissions)
            {
                //use MyDocuments folder on client machine to store data
                siaqodb = new Siaqodb("siaqodb", Environment.SpecialFolder.MyDocuments);
            }
            else
            {
                //use IsolatedStorage to store data
                siaqodb = new Siaqodb("siaqodb");
            }

            SendItAttachment attachment = new SendItAttachment();
            attachment.Name = "";
            attachment.StartTime = new DateTime(2010, 1, 1);
            attachment.EndTime = new DateTime(2010, 1, 1);

            IObjectList<SendItAttachment> senditattachments = siaqodb.LoadAll<SendItAttachment>();
            this.SendingGrid.ItemsSource = senditattachments;
        }

That's the basics; we'll get a bit fancier in new articles coming soon.

Monday, July 12, 2010

ESXi - Resizing Disk

Wow, I was running in circles. I needed to resize a system disk on
one of my development machines that's running on the HA ESX/vSphere
cluster. It was grayed out, and I could not change it.

It puzzled me, as I've resized disks dozens of times on other machines.
All my machines were created using the latest VM spec, but still
no resize. I even pushed the disk back and forth between datastores,
and it didn't come back.

What was the problem?

If you resize, you can have no snapshots.

Sure enough, deleted my snapshots, and everything was good.

Thursday, July 08, 2010

VMware - Lessons Learned - View, ESX vs ESXi, iSCSI, Backup

Wow, Its been interesting.

Designing and deploying a fully virtualized enterprise.

This includes:
1. Replacement of all desktops with thin clients.
2. Move from hosted email to a virtualized Exchange environment.
3. iSCSI-based storage for everything.
4. Blade-based servers.

Lessons Learned:
1. When you're designing your blades, use at least a single SSD
drive to host the ESX hypervisor. Yes, you can boot iSCSI; yes, you can
use an SD card.

a. An SD card means you're running ESXi, and your updates consist of re-installing. I've had to do it three times. With ESX on an SSD drive you
can use Update Manager.

b. iSCSI boot - it's a rather flakey feature. I put some time into this;
there are two problems:
1) Only a select number of iSCSI HBAs are supported.
2) It consumes a LUN on your RAID for each blade.
c. If I use SSD, I can even ghost one blade to another using FOG.

2. Storage
My primary storage is a Dell EqualLogic RAID, a PS4000 to be exact.
a) Get the biggest one that will not get you asked to leave :)
I'm buying my second one; people tend to keep a lot of garbage.
b) Make sure you put the latest firmware on the device on day 1.
c) Be very careful about your storage design. The EqualLogic
has great features, such as overcommit at the LUN level.
The problem is that when the volume/LUN gets full, it gets marked
offline, and any VMs in the associated VMFS are pretty much
dead. So put your key servers in separate LUNs. A DC and
Exchange, for example, should in my case each be allocated a dedicated volume. Then you don't have to worry that one overcommit takes everything out. I would not recommend you do it for everything, but some good division can save your bacon.
d) For key servers - domain controller, Exchange, your vCenter - I recommend you don't thin provision them. Make sure space
is allocated all the way down. That way you know there is not a lurking storage outage waiting.

3. Memory - Get lots, 8x what you think you need. Blades + VMware is not cheap; the reality is VMware on the CPU side is great, and pretty good on the memory side as well. What I found on the blades is that I could put 10x more VMs on from a CPU point of view, but memory was red-lined. So I could get more value buying more memory than buying CPUs. So when I expand, I'll up my memory from 12G per blade to at least 48G.

4. Backup - Veeam rocks. Make sure you set up jumbo frames, and be a bit
patient on setup to learn the product, but it really works quite well. You need scratch space to dump the backup file, and use Yosemite to write the backup image to the tape library. Make sure the NICs can run jumbo frames; it makes a big difference. And get yourself a nice quad-port gig-e card or a 10G interface.

5. What not to buy - Don't use a Dell R200 for anything, for a couple of reasons: no jumbo frames, and the CPU is just underpowered.
It also would not let me use some ESX features, so I ended up using a native install on it.

Friday, June 11, 2010

Finding Identical and Different Images

It's a common problem to have lots of images, and when you do, it's even more common to
have lots of duplicates. The problem with images is that they can be the same to the eye but vastly different at the binary level: different resolutions, slightly different cropping, yet at a glance the same. In IT it's common to calculate a hash, or magic number, that changes radically for a slight difference. What's more useful in image deduplication is a hash that changes only slightly for near-identical images.

This Ruby plugin does exactly that. I'm looking at using it in an upcoming project, and will update the article as I go forward. What's even more exciting is the ability to do the same for video.
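The idea can be sketched without the plugin itself: an "average hash" gives each pixel one bit, brighter-than-average or not, so small edits flip only a few bits while a cryptographic hash would change completely. This is a dependency-free illustration, not the plugin's code; a real implementation would first decode and downscale the image.

```ruby
# Illustrative average-hash ("aHash") over an 8x8 grayscale matrix.
def average_hash(pixels) # pixels: 8x8 array of 0..255 brightness values
  flat = pixels.flatten
  mean = flat.inject(:+) / flat.size.to_f
  # One bit per pixel: brighter than the mean or not.
  flat.map { |p| p > mean ? 1 : 0 }.join.to_i(2)
end

# Similarity = Hamming distance (number of differing bits).
def hamming(a, b)
  (a ^ b).to_s(2).count("1")
end

img  = Array.new(8) { |r| Array.new(8) { |c| (r * 8 + c) * 4 } }
near = img.map { |row| row.map { |p| p + 3 } } # slightly brightened copy

puts hamming(average_hash(img), average_hash(near)) # => 0 (same fingerprint)
```

The uniform brightness shift leaves every bit unchanged, which is exactly the "changes only slightly for near-identical images" property; two unrelated images land many bits apart.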

Thursday, May 27, 2010

Moving thin vm's between datastores on esxi

Wow, I wanted to move my VMs from a testbed running on ESXi 4.0 to my production servers, which support iSCSI.

The trick is to move them quickly, without letting them balloon.

To do this you have to move the disk images (*.vmdk) and the snapshots using
vmkfstools, plus the .vmx file and the .nvram file.

So as a example:

The original directory:
/vmfs/volumes/4bbda047-6fa7b452-4280-6cf049e2d29a/sin-fs-1 # ls -l -h
-rw------- 1 root root 80.0G May 27 01:37 sin-fs-1-flat.vmdk
-rw------- 1 root root 8.5k May 27 01:37 sin-fs-1.nvram
-rw------- 1 root root 501 May 26 07:14 sin-fs-1.vmdk
-rw------- 1 root root 0 Apr 30 10:07 sin-fs-1.vmsd
-rwxr-xr-x 1 root root 3.2k May 27 01:37 sin-fs-1.vmx
-rw------- 1 root root 263 May 5 01:33 sin-fs-1.vmxf
-rw------- 1 root root 600.0G May 27 01:37 sin-fs-1_1-flat.vmdk
-rw------- 1 root root 504 May 26 07:15 sin-fs-1_1.vmdk
-rw-r--r-- 1 root root 234.7k May 11 09:21 vmware-2.log
-rw-r--r-- 1 root root 135.3k May 12 06:29 vmware-3.log
-rw-r--r-- 1 root root 135.3k May 13 02:55 vmware-4.log
-rw-r--r-- 1 root root 135.3k May 25 04:03 vmware-5.log
-rw-r--r-- 1 root root 132.5k May 25 04:14 vmware-6.log
-rw-r--r-- 1 root root 135.2k May 26 07:07 vmware-7.log
-rw-r--r-- 1 root root 136.5k May 27 01:37 vmware.log
/vmfs/volumes/4bbda047-6fa7b452-4280-6cf049e2d29a/sin-fs-1 #

Note the disk image is 600G. It's actually a sparse file that is very thin, so we only want to copy the data, not the empty space. Also, a VM can have snapshot disk images, so you have to copy those too.

So in this case lets first copy the disk images:

vmkfstools -i /vmfs/volumes/datastore1/sin-fs-1/sin-fs-1.vmdk -d thin sin-fs-1.vmdk
vmkfstools -i /vmfs/volumes/datastore1/sin-fs-1/sin-fs-1_1.vmdk -d thin sin-fs-1_1.vmdk

Now we copy the vmx files:
cp /vmfs/volumes/datastore1/sin-fs-1/sin-fs-1.vmx .
cp /vmfs/volumes/datastore1/sin-fs-1/sin-fs-1.nvram .

You can now use the vSphere client datastore browser to import, or import at the command line.

Thursday, April 15, 2010

Reverting a Cisco LAP1142N to Autonomous Mode

Wow, this was fun.

A friend of mine got in half a dozen Cisco APs, and they all came with the wrong firmware. These are nice, slim flying-saucer WiFi APs with 2 radios supporting
2.4G and 5.0G operation. The focus of the line is to use a controller; my friend
chose not to buy a controller, but to buy autonomous units.

So when they came in, IOS was missing in action.

While there are notes on the web on how to do this, none appeared to work.

The basics are: set up a tftp server at a magic address, and the AP will boot, set itself to an address on the same network, and check for a default image
based on the model.

It didn't work at all, despite shooting a day on it.

I tried several different tftp implementations, including Linux tftpd, atftpd, and
finally two different ones on Windows.

So after two days of pain, and some inspiration from one of my friends, I figured out
the secret sauce.

1. Press and hold the mode button.
2. Let it boot until it notices (you can see the messages if you plug in the Cisco cable; it will say the mode button is pressed in the logs).
3. It will fail to find the image it needs and dump you to the bootloader.
4. At the bootloader, erase the flash: format flash:
5. Reboot.
6. Set the IP address, network mask, and router, as the instructions are given in the log.
7. tftp_init
8. tar -xtract tftp://thefirmwarefilename.tar flash:

If you mess up, start back at step 1.

tftp is very sensitive to traffic, and to extra overhead on the server side,
so be careful about switches in the path.

Sunday, January 17, 2010

Mac Printing - Getting Color to PCL Printers

Wow, what a pain. My friend has a couple of DocuCentre III printers, and now they
have quite a few Macs in the office. Getting black-and-white printing is easy:
just use the generic black-and-white driver, and you're printing. Now, of course, they
want to print color.

So after trying everything I could think of, I came to find out about the standard printing subsystem, which uses Gutenprint. I upgraded to the latest, which is version 5.2.4,
and still no soap, just black-and-white printing. After reading the forums, it seems the generic Gutenprint 5.2.4 cannot print color on these.

So the choice is to add a second printing subsystem.
It's called foomatic; plus you need the RIP (Raster Image Processor), which is a version
of ghostscript, and finally the print driver, which is in the pxlmono package.

For my case I downloaded foomatic and gplgs from their project sites.

The driver you need is pxlcolor, and it's contained in the pxlmono installer.

You must install all three. Start with gplgs, which is a shared ghostscript,
followed by foomatic, and finally pxlmono.

In your printer settings you will now have a print driver for:
Generic PCL6/PCL XL Printer Foomatic/pxlcolor

After you add the printer, if it does not print, you must go to the CUPS web interface
and change the print-out-mode setting. To do that, go to set-printer-options at http://localhost:631

Select your printer, go to the Set Printer Options tab, and change the printout mode from normal-grayscale to normal.

This will print nicely on the DocuCentre in color, and I suspect on most PCL-capable printers.

Sunday, January 10, 2010

Finding missing controllers and models

Ever get aggravated that you didn't create a model or a base controller for your Rails app?
I find it's a common issue when large changes are afoot, like when I create dozens
of tables in a sitting.

For those wonderful times, I've created a rake task that checks my models and controllers, and creates basic versions of the code.

 require 'rubygems'
 require 'pp'
 require 'find'

 def controller_exists?(thefilename)
   Find.find("./app/controllers") do |path|
     next if File.basename(path)[0] == ?.
     if path.include?(thefilename)
       return TRUE
     end
   end
   return FALSE
 end

 namespace :check do
   desc "Check Model, and create missing ones"
   task :modelfile => :environment do
     pp ActiveRecord::Base.connection.tables
     ActiveRecord::Base.connection.tables.each do |tname|
       begin
         themodel = tname.classify.constantize
         model_exists = TRUE
       rescue NameError
         model_exists = FALSE
       end
       case tname
       when "schema_migrations"
         model_exists = TRUE # Special table
       when "tag_cross_ref"
         model_exists = TRUE # Special table
       end
       if model_exists == FALSE # Doesn't exist
         model_name = tname.classify
         puts "Missing #{model_name} for table #{tname} - created\n"
         mfilename = "./app/models/" + tname.singularize + ".rb"
         if File.exists?(mfilename)
           puts "#{mfilename} exists - Bad NEWS\n"
         else
           mfile = open(mfilename, "w")
           mfile.puts "class #{model_name} < ActiveRecord::Base\n"
           mfile.puts "end\n"
           mfile.close
         end
       else
         puts "Exists #{tname.classify}(#{tname})\n"
       end
     end
   end

   desc "Check Controllers, and create missing ones"
   task :controllerfile => :environment do
     ActiveRecord::Base.connection.tables.each do |tname|
       controller_file_name = tname.singularize + "_controller.rb"
       ctl_missing = !controller_exists?(controller_file_name)
       case tname
       when "schema_migrations"
         ctl_missing = FALSE # Special table
       when "tag_cross_ref"
         ctl_missing = FALSE # Special table
       end
       if ctl_missing
         controller_name = (tname.singularize + "_controller").camelize
         puts "Missing #{controller_name} for table #{tname} - created\n"
         cfilename = "./app/controllers/adminspace/" + controller_file_name
         if File.exists?(cfilename)
           puts "#{cfilename} exists - Bad NEWS\n"
         else
           cfile = open(cfilename, "w")
           cfile.puts "#generated by checker\n"
           cfile.puts "class Adminspace::#{controller_name} < ApplicationController\n"
           cfile.puts "layout 'tier1admin'\n"
           cfile.puts "before_filter :login_required\n"
           cfile.puts 'require_role "admin"' + "\n"
           cfile.puts "\n"
           cfile.puts "  active_scaffold :#{tname.singularize} do |config|\n"
           cfile.puts "    config.actions = [:nested, :create, :update, :show, :list, :search]\n"
           cfile.puts "  end\n"
           cfile.puts "\n"
           cfile.puts "end\n"
           cfile.close
         end
       end
     end
   end
 end

Monday, October 12, 2009

Virtual Models and Parsing in Ruby

Ever want to process something external to your database or your Rails app, such
as the contents of files, but you think Rails is only good for database items?
Actually, Rails can easily adapt to handle any kind of data.

In these examples, I process all my controllers to give a view and a RESTful interface
for testing of my controllers. I also do the same for the menus, all without a database
table in sight.

First, let's take a look at the controllers "model".

This allows us to view our controllers from within Rails, useful if you want to know what code is running, or
in my case to use Watir to test. The lovely part is I can actually RESTfully pull the list of controllers, and make
sure each is hit in Watir.
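Such a table-less "model" can be as small as the following sketch; the class and method names here are illustrative stand-ins, not the post's original code:

```ruby
# A "virtual model": each instance wraps one controller file
# instead of a database row.
class VirtualController
  attr_reader :name, :path

  def initialize(path)
    @path = path
    @name = File.basename(path, ".rb")
  end

  # Enumerate app/controllers the way a find(:all) would enumerate rows.
  def self.all(root = "./app/controllers")
    Dir.glob(File.join(root, "**", "*_controller.rb")).sort.map { |f| new(f) }
  end

  # Enough XML for a RESTful index action to render.
  def to_xml
    "<controller><name>#{name}</name><path>#{path}</path></controller>"
  end
end
```

An index action can then render `VirtualController.all`, and Watir can pull that list and hit each controller in turn.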

The next problem is menus: how do I validate that each menu does something? A little of the same
technique. Here it's a bit more advanced; we actually build a parent menu table and a child menu-item table.

Here is the menu model. The Menu model actually creates all the child menu items at startup. It does this by
parsing tabnav/widget-format menus. While I was looking at a way of using ERB to do this for me, I ended up just
doing the parsing myself.
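The parsing step can be sketched as a small line scanner; the regexes, the sample menu text, and the MenuItem struct are illustrative assumptions about the tabnav format, not the post's original code:

```ruby
# Parse a tabnav-style menu definition into label/target pairs.
MenuItem = Struct.new(:label, :target)

def parse_menu(source)
  items = []
  label = nil
  source.each_line do |line|
    if line =~ /t\.named\s+['"]([^'"]+)['"]/
      label = $1 # remember the tab label...
    elsif label && line =~ /t\.links_to\s+['"]([^'"]+)['"]/
      items << MenuItem.new(label, $1) # ...and pair it with its link
      label = nil
    end
  end
  items
end

menu = <<MENU
  t.add_tab do |t|
    t.named 'Home'
    t.links_to '/home'
  end
  t.add_tab do |t|
    t.named 'Reports'
    t.links_to '/reports'
  end
MENU

parse_menu(menu).each { |m| puts "#{m.label} -> #{m.target}" }
```

Each parsed MenuItem then becomes a child record under its parent Menu, which is all the validation view needs.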

And finally the MenuItem Model

After this I do simple ActiveScaffold-based controllers, and more importantly add routes so I can pull XML files
for testing.

Friday, August 21, 2009

ActiveJquery - Without git

If you pull the files and install them without using the Rails plugin code, there
is an install script that copies things into the right place.

The file will be in: #{projectroot}/vendor/plugins/activejquery

So for me, its in the following directory:

As you can see, it actually copies a few files into the right place in your project.

directory = File.dirname(__FILE__)
copy_files("/public/css", "/public/css", directory)
copy_files("/public/javascripts", "/public/javascripts", directory)
copy_files("/app/views/activejquery", "/app/views/activejquery", directory)

I love to help, so if you have questions, feel free to email me.
Or if you're commenting, please include your email, as when you comment on the
blog, the system does not give me an email to respond to.

Monday, June 15, 2009

Call Center Voice Recording

Ever wonder how you do audit in a large call center?

Here's the basics:

First, most modern call centers are using VoIP. It makes it easy to set up, and even better,
makes it easy to monitor. Also, call centers tend to be in places like India or the Philippines,
serving the US or European markets, so VoIP plays a role in getting the traffic from one
country to another.

There are several pieces to this puzzle:

First, you need to capture the raw data.
To do that, we need a "tee" off the ethernet channel that is connecting to the VoIP gateway.
This will allow us to see all the VoIP traffic that is going back and forth.

It's sometimes called 'port mirroring', 'port monitoring', 'Roving Analysis' (3Com), or 'Switched Port Analyzer' or 'SPAN' (Cisco).

Now that we have traffic, a good way to capture the voice is Oreka.

Oreka lets you capture the packets and convert them to audio files. It actually consists
of three parts.

- OrkAudio: This is the workhorse that processes the calls and does the actual recording
- OrkWeb: This is the XML based Web U/I to access and manage the system
- OrkTrack: This is the master database (MySQL) that records the call records, metadata, etc.

Oreka lets you get a basic system up quickly.

The next issue is setting up a system of audit, and connecting and understanding the calls in terms of transactions,
customers, call agents, and supervisors.

For a good GUI on top, I'd switch to a Rails App. This lets me do several things:

1. I can tie agents to calls
2. I can allow agents access to review their own calls
3. I can allow supervisors to view/listen to calls for agents reporting to them
4. I can let the audit department audit calls as appropriate
5. Ease of interfacing existing call center apps together

Now that we have phase 1 and phase 2 completed, let's add a phase 3.
One of the emerging technologies is speech to text.
Using the latest SDKs from Dragon NaturallySpeaking, we can get very high recognition (99% plus) on our agents,
and not-bad recognition on our clients/customers calling in. This will allow audit to search
for key words, such as scam, theft, etc., that audit should be better aware of.
The ideal way of implementing the whole system is to break things down into separate VMs in a VMware system.

1. OrkAudio - 1 VM per major trunk (one per multiple DS1s, or one per DS3)
2. OrkWeb - 1 VM for the system, for reference/admin control only
3. Database - in a call center, Oracle or DB2 may be better suited, depending on the company
4. VoiceCenterManager - a Rails server using Nginx/Passenger that provides interfaces to agents, supervisors, audit,
and managers. Number of VMs determined by staff size
5. Recognition engines - a separate VM running a Ruby agent tied to the Dragon SDK, allowing speaker-dependent and speaker-independent speech-to-text conversion. This allows for the creation of easily searchable as well
as readable results of the calls in text form. Using a rack of blades and VMware, we can convert all calls to text in real time,
index them, and make them searchable.

Note that each call center is different, but leveraging Rails and Ruby, an agile and fast solution can be combined that
gives the best in audit and accountability.

Thursday, June 11, 2009

Ghosting NTFS/Windows using Knoppix

I have a 1 Gig USB stick that is on my key ring.
It's really handy, as I have a complete copy of Knoppix.
Knoppix is a Linux distribution designed to boot and run
from CDROM or DVD. It can also run from a handy USB stick.
With the right options it can also run in RAM, which is handy if you want to
run a few parallel operations and only need one USB stick.

You can use Clonezilla if you want something more packaged,
but in a shop that is Windows-centric, ghosting to a Windows server
may make more sense.

Boot your usb stick, then at the bash shell, mount your server:

mount -t cifs //server-name/share-name /mnt/cifs -o username=shareuser,password=sharepassword,domain=nixcraft

On the share, I usually have my ntfsclone scripts.

On a Dell machine, there is really only one partition that has data,
so the dobackup script is very trivial.
It takes the OS partition, which in most of the environments I was working in
would be /dev/sda2, and uses ntfsclone, which takes only the in-use
sectors, compresses them, and puts them in a file on the server.

dobackup script

echo backup $1 to $2
mkdir $2
ntfsclone -s -o - /dev/$1 | gzip > $2/$1.ntfsclone.gzip

The reverse, to restore is:

dorestore script

echo restore $1 to $2
cd $1
cat $2.ntfsclone.gzip | gunzip - | ntfsclone --restore-image --overwrite /dev/$2 -

The only other mandatory item is the partition table:

doptablebackup script

echo partition backup $1 to $2
sfdisk -d /dev/$1 > $2/ptable.doc
dd if=/dev/$1 bs=512 count=1 of=$2/ptable.sector

doptablerestore script

echo partition restore $1 to $2
dd of=/dev/$2 bs=512 count=1 if=$1/ptable.sector

Sunday, June 07, 2009

Dynamically Generating Javascript using Parameters

I love re-using code, and really believe in DRY.
In doing ActiveJquery, I want my grids to be very re-usable.
For one, I need to use multiple grids/sub-grids to handle Rails
relationships; second, I need to be able to have multiple grids
on one page. And I want to do all of this with the least amount of code.

In ActiveJquery, the grid code is generated on the fly. While it looks like a normal
call, it's actually hitting controller code tucked inside the plugin.

So when we request the javascript code for a particular controller, the ActiveJquery
code generates the appropriate javascript on the fly.

In doing the sub-table, I need to have parameters passed in, so I rely on standard
Rails semantics when handling HTML:

<script src="location.js?subof=Company&div=Company_Location" type="text/javascript"></script>

This technique lets me generate customized javascript on the fly, making the generator re-usable, and
the grid re-usable any number of times in the same application.
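The generator behind that script tag can be sketched as an ERB template plus the request parameters; the template and method names here are illustrative, not the plugin's actual source:

```ruby
require "erb"

# jqGrid setup rendered per-request; div and subof come from the query string.
JS_TEMPLATE = <<TEMPLATE
jQuery("#<%= div %>").jqGrid({
  url: "/<%= resource %>.xml<%= "?subof=" + subof if subof %>",
  datatype: "xml"
});
TEMPLATE

def grid_js(resource, params)
  div   = params["div"] || resource.capitalize # default div name
  subof = params["subof"]                      # parent table, if a sub-grid
  ERB.new(JS_TEMPLATE).result(binding)
end

puts grid_js("location", "subof" => "Company", "div" => "Company_Location")
```

The same generator serves a standalone grid (no parameters) and any number of parameterized sub-grids, which is what makes one generator safely re-usable on a single page.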

Saturday, June 06, 2009

Finding ActiveRecord Associations

I'm working away on ActiveJquery, and the next thing on the list is to add support for relationships. This gets to be really interesting, because you're inside the box, in a plugin, and you need to know what associations the user has.

I was scanning the documentation, and looking around for the api to find the information.

After searching for a while, I discovered railway, a gem for Rails that does diagramming of Rails models using dot. So grab the gem and look through the source.

So the key to finding associations is:

@associations = table.reflect_on_all_associations

This results in:
Company(id: integer, name: string, location_id: integer, created_at: datetime, updated_at: datetime),
Company(id: integer, name: string, location_id: integer, created_at: datetime, updated_at: datetime),
Company(id: integer, name: string, location_id: integer, created_at: datetime, updated_at: datetime),

From my models:
class Company < ActiveRecord::Base
  has_many :user
  has_many :location
  has_many :division
end

class Department < ActiveRecord::Base
  belongs_to :division
end

class Division < ActiveRecord::Base
  belongs_to :company
end

class Location < ActiveRecord::Base
end

class User < ActiveRecord::Base
end
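Each reflection carries the association's macro and name, which is what a plugin needs to build sub-grids. In this sketch a dummy struct stands in for ActiveRecord's reflection objects:

```ruby
# What Company.reflect_on_all_associations would hand back, reduced to
# the two fields the plugin cares about.
Reflection = Struct.new(:macro, :name)

reflections = [
  Reflection.new(:has_many, :user),
  Reflection.new(:has_many, :location),
  Reflection.new(:has_many, :division)
]

# In the real app: Company.reflect_on_all_associations.each { |r| ... }
reflections.each { |r| puts "Company #{r.macro} #{r.name}" }
```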

Thursday, May 14, 2009

ruby script/plugin git does not work in Rails 2.3.2 and Ruby 1.9

Lots of people are scratching their heads over why they cannot
install Rails plugins using git.

I'm working with Ruby 1.9.1 and Rails 2.3.2, and git is a pretty natural
fit for working with version control.

So I wanted to pull from my own repository, but kept getting
an error on trying to do the install.

sin-gwest-laptop:testjq gwest$ ruby script/plugin --verbose install git://
Plugins will be installed using http
Plugin not found: ["git://"]
#<TypeError: can't convert Array into String>

Come to find out, this is a known bug: the mkdir_p call has changed its
return value in Ruby 1.9, and that messes up the install of git plugins
unless you apply the patch.

ActiveJquery Goes to version .011

I've updated ActiveJquery on github to .011

That was fun. I thought I'd put the authtoken issue to bed.
On doing tests for delete and add, I found I was still getting authtoken
issues. So I moved from using the editData jqgrid parameter to adding
it to the editurl.

Also found a bug in the page-based XML pull of the controller component.
This would result in you not seeing the last 10 records. (You notice that
when you're adding records and they don't show up.)
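A plausible reconstruction of that class of bug (the plugin's actual fix isn't shown here): page math done with integer division silently drops the final partial page, so the total-pages count has to round up:

```ruby
# Integer division: 95 / 10 == 9, so records 91-95 never get a page.
# Rounding up with ceil serves the final partial page too.
def total_pages(total_records, per_page)
  (total_records.to_f / per_page).ceil
end

puts total_pages(95, 10) # => 10, not the 9 that integer division gives
```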

ActiveJquery consists of:

1. active_jquery               - The controller plugin
2. active_jquery_runtime - Generates the dynamic jqgrid javascript
3. Dynamic Javascript      - The code that runs in the browser
4. jQuery/Jquery UI/jqGrid

I've also combined all the needed public css and javascript into the plugin.

I need to add a rake task and an init that make sure they get copied.

ActiveJquery Reaches Version .010

ActiveJquery, which is designed to integrate Jqgrid, Jquery UI into rails, is
now at Version 0.010.

It now is a Rails 2.3.3 Controller Plugin. You can invoke it in a single
line in your controller.

The plugin automatically generates javascript for the grid, based on your table,
with inline editing, as well as a full REST server to serve the data to the browser.

I've implemented ForgeryProtection on POSTs, and that is working. There seems to be a bit of confusion in the blogosphere about whether the URI components need to be encoded or not. At least in Ruby 1.9.1 and Rails 2.3.2, they do NOT need to be encoded.

1. Allow Customizations
2. Allow Relationships and SubTables
3. JQuery UI Menus

Test, and more tests

Also I will implement a rails DEMO app that is based on data from the JQGRID site.

I'll make a separate git repository for the demo.

Wednesday, April 29, 2009

ActiveJquery - Status

Current Things That Are Working:

1. Can read full table
2. Can read using JqGrid Pager
3. Added Sort support, so server will honor the grid sort request
4. Added Delete,Add and Update support.
5. Auto Generates JqGrid Javascript from ActiveRecord
6. Added Total Records to XML so the Pager works properly.

ActiveJquery Library/Client
1. Added support for string, integer, date.
2. Generates JSON Based Reader compatible with Rails JSON Format
3. Uses Humanize to handle automatic column names.
4. Uses JqueryUI for theming
5. InLine Edit Support

Things to Do:
1. Paste Controller code into prepared plugin
2. Add static data support
3. Add a bit of DSL(Domain Specific Language) to allow easy configuration
4. Add Master/Detail Support
5. Add Date Picker Plugin
6. Add Parent Table Dynamic Data Selector

ActiveJquery - Features

ActiveJquery is a Rails Plugin that combines the goodness of Jquery, Jquery UI, and JqGrid.

REST - ActiveJquery breaks your GUI into a javascript REST client and a Rails-based REST JSON server. This reduces
the overhead of your web app, and gives you better expandability and better response time.
DRY - Tired of repeating yourself? Don't. ActiveJquery will find your field names and configure the grid for you.
Customizable - You can configure grids in countless ways, allowing you easy control over what you want to see.
MultiGrid - Need a bunch of grids on one page? Not a problem. You can have any number of grids on one page. Each is named automatically for you.
Master/Detail - Your Rails associations are used to generate sub-grids or master-detail views of your data.
Theming - ActiveJquery uses Jquery UI themes, so you can easily customize the look and color scheme of the grid.
Menus/Tabs - Want an easy-to-use menu system? ActiveJquery supports JqueryUI tabs, so you can do complex, but
easy-to-use, apps.
Dynamic Data - By default, data is read a screenful at a time, using the grid pager. This reduces the need to read large amounts of a big table, and also makes loading multiple grids very fast.
Static Data - ActiveJquery can embed the table data directly in the javascript. This is great for tables that are 1000 rows or less and are read-only.

JqGrid Pager Problem

Trying out the latest versions of jQuery/jQuery UI/jqGrid, and I just cannot get the formatting right.

Let's see if we can figure out what's going on.

The HTML for the grid:
<title> Airstate </title>
<h2> Airstate </h2>
<script src="javascripts/jquery-1.3.2.min.js" type="text/javascript"></script>
<script src="javascripts/jquery-ui-1.7.1.custom.min.js" type="text/javascript"></script>
<script src="javascripts/jquery.layout.js" type="text/javascript"></script>
<script src="javascripts/jqModal.js" type="text/javascript"></script>
<script src="javascripts/jqDnR.js" type="text/javascript"></script>
<script src="javascripts/jquery.jqGrid.js" type="text/javascript"></script>
<link rel="stylesheet" type="text/css" media="screen" href="themes/redmond/jquery-ui-1.7.1.custom.css" />
<link rel="stylesheet" type="text/css" media="screen" href="themes/ui.jqgrid.css" />
<style type="text/css">
html, body {
  margin: 0; /* Remove body margin/padding */
  padding: 0;
  overflow: hidden; /* Remove scroll bars on browser window */
  font-size: 75%;
}

/* Splitter style */
#LeftPane {
  /* optional, initial splitbar position */
  overflow: auto;
}

/* Right-side element of the splitter. */
#RightPane {
  padding: 2px;
  overflow: auto;
}

.ui-tabs-nav li {position: relative;}
.ui-tabs-selected a span {padding-right: 10px;}
.ui-tabs-close {display: none;position: absolute;top: 3px;right: 0px;z-index: 800;width: 16px;height: 14px;font-size: 10px; font-style: normal;cursor: pointer;}
.ui-tabs-selected .ui-tabs-close {display: block;}
.ui-layout-west .ui-jqgrid tr.jqgrow td { border-bottom: 0px none;}
.ui-datepicker {z-index:1200;}
</style>

<script type="text/javascript">
var listlastsel;
jQuery(document).ready(function(){
  jQuery("#list").jqGrid({
    datatype: 'xml',
    mtype: 'GET',
    colNames:['Id','Created on','Whom','Logmessage'],
    colModel :[
      {name: 'id',index:'id',key:true,width:80,align:'right'},
      {name: 'created_on',index:'created_on',key:false,width:90},
      {name: 'whom',index:'whom',key:false,width:300,align:'left',editable:true},
      {name: 'logmessage',index:'logmessage',key:false,width:300,align:'left',editable:true}],
    pager: jQuery('#list-pager'),
    onSelectRow: function(id){
      if(id && id!==listlastsel){
        listlastsel = id; // remember the last selected row
      }
    },
    autowidth: true,
    sortname: 'id',
    sortorder: "desc",
    viewrecords: true,
    imgpath: 'themes/basic/images',
    caption: 'Airstate',
    xmlReader: {root: "root", row: "syslog"}
  });
  jQuery("#list").navGrid('#list-pager',{ edit:true,add:true,del:true,search:true });
});
</script>
<div id="list-pager" class="scroll"></div>
<table id="list" class="scroll"></table>


Now let's see if we can figure out what's going on.

OK, great support from jqGrid. Seems like it's a bug in the latest v2 alpha test.
Alpha 3 should solve the problem.

Tuesday, April 07, 2009

Getting only the columns you want in Rails

A thing that is often missed when you're doing a web service or a Rails app is selecting only
the columns you need.

A traditional find:

user = User.find :all
render :xml => user.to_xml

Will give you the bloat of the whole user table.
If you only have a few small columns then that's not so bad,
but I've seen legacy apps that have a very large number of columns,
so it makes sense to control this better.

user = User.find(:all, :select => 'email')
render :xml => user

This also will make your resulting XML file a lot more manageable.
And since bandwidth is not free, it will help you handle more users
for less money.

Network Monitor as a Windows Service in Ruby

In reading through the Ruby Google Group, there seems to be a lot of misconceptions about what should be done inside
of a Rails process and what should be done outside it. Generally, if it takes any time at all, you should run it outside
the Rails web server. In Windows you would use a Windows service; in Linux/Unix you would use a daemon.

This is an example of a Windows service that scans Cisco switches, finds new nodes, and keeps track of their MAC
addresses and ports. It even builds a network map.

The app consists of a Rails app to display the data and take in configuration information, which is put in a database,
such as sqlite3, and updated by this Windows service.

As long as the service is running, the data will be updated. If I were doing it again, I'd use rufus-scheduler to
wrap the scanning and the mapping.

require 'rubygems'
require 'win32/daemon'
include Win32
require 'logger'
require 'win32/process'
require 'net/ping'
#Note that most of your requires need to go in the service init

class IPAddr
  def succ()
    return self.clone.set(@addr + 1)
  end
end

class AirstateDiscovery

def doscan()

def ProcessDiscovery()
Mac.find(:all).each do |mymac|
mydevice = mymac.device
if mydevice
mynet = Network.find_network_by_ip(mymac.ip)
if mynet #Ok we have a valid network

def ProcessSwitches()
Switch.find(:all).each do |myswitch|
if myswitch.enabled
if myswitch.switch_type == "cisco"
cisco =
# end # Rescue
cisco = nil
end #if myswitch.switch_type == "cisco"
end # myswitch.enabled
thegraph =
thegraph = nil
end # find do

def ProcessNetworks()

# Brute Force Scan
Network.find(:all).each do |mynet|
ProcessSwitches() # Networks is "long" running, Switch is "fast"
if mynet.enable
startip = mynet.ip_start
endip = mynet.ip_end
ip =
while ip.to_s != endip
ip = ip.succ()
end # def Process Networks

end # class AirStateDiscovery

class Daemon
def service_init

def service_main

require 'ipaddr'
require File.dirname(__FILE__) + '/../config/environment.rb'
require 'lib/cisco_phone'
require 'lib/managepc'
require 'lib/switchgraph'
@mywork =
mylog =
mylog.whom = "AirStateDiscovery"
mylog.logmessage = "Discovery Starting"
while running?
sleep 3

# Test Code
# require 'ipaddr'
# Dir.chdir("\\projects\\airstate")
# require File.dirname(__FILE__) + '/../config/environment.rb'
# require 'lib/managepc'
# require 'lib/cisco'
# require 'lib/switchgraph'
# require 'lib/cisco_phone'
# @mywork =
# while 1
# @mywork.doscan()
# end
# exit


Sunday, April 05, 2009

Rails - Useful Plugins

In updating myself I found some really useful Rails plugins for my next project.

Hoptoad Notifier
A Rails app is a very dynamic thing. Finding out when your users have errors, and what the pattern of errors is, is a must-have. The traditional approach is to view the log files from time to time, or use a notifier that sends the errors to your email, then go through them as they come in. First, it's not uncommon for an error to repeat, so you get a lot of junk in your email. A better approach is to use the Hoptoad service/plugin. This way you get nice reporting and analysis, and for a single project, it's free. It consists of a hosted site web GUI, a Rails plugin, and an optional local Mac GUI to show your croaks, i.e. the errors in your apps.

Simple Ruby Rest Client
REST is a lovely way of doing remote procedure calls. It's the Rails way of creating web services. Sometimes it's handy to have a simple REST client.
There is a simple Ruby REST client done by Adam Wiggins.

Now you can call a REST API even from IRB or the Rails console.

Saturday, March 21, 2009

C Code - Bucket Based Allocator

In realtime systems, malloc/free/garbage collection can take a very long time.

For a real-time system I'm doing, I want very fast response times, and I call malloc/free often.
Traditional malloc/free will fragment memory quickly, and garbage collection will just take too long.

The solution I've used many times is a bucket allocator: set up pre-defined sizes of memory and put them in a queue, or in this case a "bucket". The idea being: find the bucket, pull out an entry, and return it.

Also something I've found useful is to keep track of busy memory elements as well. That way, if I suspect that memory is getting corrupted, I can easily add a tag at the end of the allocation and run through the busy memory looking for the corruption. Very useful in debug.

#include <stdio.h>
#include <stdlib.h>
#include "include/queue.h"

struct bucket_struct {
    struct entry_struct entry;
    struct queue_struct free_q;
    struct queue_struct busy_q;
    int size;
};

struct alloc_struct {
    struct entry_struct entry;
    struct bucket_struct *bucket;
};

struct queue_struct buckets;

int gkmalloc_init_needed = 1;

void gkmalloc_newbucket(thesize)
int thesize;
{
    struct bucket_struct *mybucket;

    mybucket = malloc(sizeof(struct bucket_struct));
    mybucket->free_q.head = NULL;
    mybucket->free_q.tail = NULL;
    mybucket->busy_q.head = NULL;
    mybucket->busy_q.tail = NULL;
    mybucket-> = NULL;
    mybucket->entry.prev = NULL;
    mybucket->size = thesize;
    queue(&buckets, &mybucket->entry);
}

void gkmalloc_init(void)
{
    gkmalloc_init_needed = 0;
    buckets.head = NULL;
    buckets.tail = NULL;
    gkmalloc_newbucket(12000000); // Big Buf for jpegs
}

char *gkmalloc(thesize)
int thesize;
{
    struct bucket_struct *mybucket;
    struct alloc_struct *myalloc;
    void *ptr;

    if (thesize == 0) return(NULL);
    if (gkmalloc_init_needed){
        gkmalloc_init();
    }
    mybucket = (struct bucket_struct *)buckets.head;
    while(mybucket->size < thesize){
        mybucket = (struct bucket_struct *)mybucket->;
        if (mybucket == NULL){ // Should never happen
            gkfatal("gkmalloc: Allocation bigger than max bucket");
        }
    }
    myalloc = (struct alloc_struct *)unqueue(&mybucket->free_q);
    if (myalloc == NULL){ // The free queue is empty
        // Dynamically allocate from the system
        myalloc = malloc(sizeof(struct alloc_struct) + mybucket->size);
        if (myalloc == NULL){
            gkfatal("gkmalloc: Cannot malloc new buffer");
        }
    }
    myalloc->bucket = mybucket;

    queue(&mybucket->busy_q, &myalloc->entry);
    ptr = (void *)(myalloc + 1); // Step past the header to get the user memory
    return(ptr);
}

void gkfree(ptr)
void *ptr;
{
    struct bucket_struct *mybucket;
    struct alloc_struct *myalloc;

    if (ptr == NULL){
        gkfatal("gkfree: Bad Free - Null Ptr Passed");
    }
    myalloc = (struct alloc_struct *)ptr;
    myalloc = myalloc - 1; // Go back to the header info
    mybucket = myalloc->bucket;
    if (mybucket == NULL){
        gkfatal("gkfree: Bad Bucket");
    }
    queue(&mybucket->free_q, &myalloc->entry); // Return the buffer to its bucket's free queue
}

Wednesday, March 18, 2009

C Code - Link List or Queues

One of my favorite real-time system tricks is to use queues, or linked lists,
to handle message passing. Then the whole system can be lots of small co-routines, without the overhead of task-switching. Small and tight code really can run fast. Usually the problem you get with this is malloc/free overhead.

A pre-allocated chunk-based system solves that. I'll post it in a future blog.
## queue.c
#include <stdio.h>
#include <stdlib.h>

#include "include/queue.h"
/*
** Queue.c
*/

struct entry_struct *unqueue(q)
struct queue_struct *q;
{
    struct entry_struct *entry;

    if (q==NULL) return(NULL);
    if (q->head == NULL) return(NULL);
    entry = q->head;
    q->head = entry->next;
    if (q->head == NULL) q->tail = NULL;
    entry->next = NULL;
    entry->prev = NULL;
    return(entry);
}

void queue(q,entry)
struct queue_struct *q;
struct entry_struct *entry;
{
    if (q==NULL) return;
    if (entry == NULL) return;
    entry->next = NULL;
    entry->prev = NULL;
    if (q->head == NULL){
        q->head = entry;
        q->tail = entry;
    } else {
        entry->prev = q->tail;
        q->tail->next = entry;
        q->tail = entry;
    }
}

struct queue_struct *init_queue(void)
{
    struct queue_struct *q;

    q = (struct queue_struct *)malloc(sizeof(struct queue_struct));
    q->head = NULL;
    q->tail = NULL;
    return(q);
}
## queue.h
/*
** Queue.h
*/

struct entry_struct {
    struct entry_struct *next;
    struct entry_struct *prev;
};

struct queue_struct {
    struct entry_struct *head;
    struct entry_struct *tail;
};

struct entry_struct *unqueue(struct queue_struct *);
void queue(struct queue_struct *, struct entry_struct *);
struct queue_struct *init_queue(void);