Computer Magic
Software Design Just For You
 
 

When free isn’t free – enforcement of the GPL just because

December 15th, 2009

All I can say is wow. I read this article http://www.computerworld.com/s/article/9142262/Multiple_consumer_electronics_companies_hit_with_GPL_lawsuit today, and unfortunately I am not surprised. This is a perfect example of why many developers shy away from the GPL license when distributing their open source code.

First, I want to say I am a big fan of Open Source. I have used it extensively and have even contributed in my own way over the years. The concept that code can be free and the fruits of someone’s labor can be shared is very noble. But I have never understood the idea that you will give your code away for free and then put major restrictions on it. Is it free or not?

How is it that an individual who feels so strongly that code should be freely available, and who encourages others to use it and contribute to it, would throw a tantrum when someone actually uses it? If you are so worried about someone “ripping off” your free code, then don’t release it for free!

I hear a lot of talk about empowerment and sharing. It is time for people to quit preaching and start walking the walk. Until then, it isn’t free and open, is it?

As a side note, the article states that the BusyBox code has most likely not even been modified. I am all for giving back to the community, but legal action because you don’t distribute something that is readily available from the source? Seriously?

Recovering from a compressed ZFS root pool

September 24th, 2009

OpenSolaris and ZFS are great, but not without their limitations. A relatively recent addition is the ability to boot Solaris from a ZFS pool, but not if you compress it or use raidz: only single drives or mirrors, and no compression.

A good practice is to set up a root pool for the system, then set up a separate data pool for your files. Since we are using Solaris in this instance as a virtual machine host, this model works quite well.

Compression has some advantages. Aside from saving hard drive space, compressed data means fewer I/O reads and writes, which can increase performance under the right conditions. Too often, though, the warning NOT to compress the root pool is missing from the ZFS literature. Don’t compress your root pool!

For those who have done it, you may not realize your mistake until it is too late. Turning on compression doesn’t compress existing files; compression settings ONLY take effect when a file is written. That also means that turning compression off is not enough if you need to recover your boot drive, because already-compressed files stay compressed until they are rewritten.

If you have compression turned on and rewrite your boot files, you will end up with an error message on boot indicating file system corruption. Note: the following instructions assume root access; add pfexec in front of each command or use su – (OpenSolaris) as appropriate.

To recover, you will need to do the following:
1) Boot from the live Solaris CD
2) Open a terminal
3) Find your ZFS Pools: zpool list
4) Import your ZFS pool (you only need the root pool; mine is called rpool): zpool import rpool
5) Turn off compression: zfs set compression=off rpool
6) List your datasets and mount points (you need the current root dataset under ROOT; mine is rpool/ROOT/opensolaris; take note of its mount point, which should be / ): zfs list
7) Setup a folder to mount to: mkdir /mnt/root
8) Change the mount point on your root dataset: zfs set mountpoint=/mnt/root rpool/ROOT/opensolaris
9) Mount your ROOT: zfs mount rpool/ROOT/opensolaris
10) Move into the newly mounted root: cd /mnt/root
11) Copy each system folder. You NEED to copy it so that a NEW copy of every file is written; with compression off, the new copy will be written uncompressed. Simply moving or renaming a folder isn’t enough, because no data blocks get rewritten. Copy each folder to a temporary name, move the original out of the way, and then give the uncompressed copy the original name (moving the copy straight onto the existing folder would just nest it inside rather than replace it). Here are the commands for the boot folder: cp -R boot boot_tmp; mv boot boot_old; mv boot_tmp boot
Repeat for each of these system folders in the /mnt/root folder (a loop that handles all of them is sketched after these steps):
– boot
– kernel
– system/objects
– etc
– platform
12) Now that you have uncompressed copies of your system folders, unmount the file system: zfs unmount rpool/ROOT/opensolaris
13) Export the pool cleanly so there is no corruption: zpool export rpool
14) Reboot – type the reboot command and remove the live CD. You should now be able to boot into your Solaris install with an uncompressed root pool.
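
If you are comfortable scripting it, the copy/rename dance from step 11 can be done in one small loop. This is only a sketch: it assumes your root filesystem is mounted at /mnt/root and that the folder list above matches your system, so adjust as needed.

cd /mnt/root
for dir in boot kernel system/objects etc platform; do
    cp -R "$dir" "${dir}_tmp"     # writes new, uncompressed copies of every file
    mv "$dir" "${dir}_old"        # keep the original until you have rebooted successfully
    mv "${dir}_tmp" "$dir"        # put the uncompressed copy in the original location
done

Once the machine boots cleanly, the *_old folders can be deleted.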

NOTE: If you realize your mistake before rebooting, you can do all of this from the running system and save yourself the trouble of booting the live CD. Some files under system/objects will refuse to copy, but you should still be able to reboot correctly.

Hope this saves someone some time. Use at your own risk.

Ray Pulsipher

xVM (xen) poor VMDK file performance with ZFS and SATA drives

September 24th, 2009

We recently set up an xVM server on a new blade with 24 GB of RAM and three 500 GB SATA hard drives. We installed Linux as a paravirtualized guest and we were off.

** A quick note: this post is geared towards those who have poor I/O performance running xVM, ZFS, and SATA drives. We have an Intel ICH9R controller set up as individual AHCI drives (not using the soft RAID). Our drives were set up as RAID-Z in a ZFS pool.

We noticed pretty quickly that while xVM and its hosted OSes were performing very well in the CPU and memory department, the I/O speed was troubling. Our systems don’t do heavy I/O, so it was no issue until the nightly backups fired off. We quickly switched the backup to use rsync to cut down on the amount of I/O, which was really a better solution for us anyway.

While prepping a second, similarly configured blade, we were able to track down some old threads about xVM, ZFS, and poor performance. It looks like some SATA drivers and ZFS have issues with DMA transfers when more than 3 GB of memory is installed on the server and Xen is enabled. Also, as far as performance goes, the Dom0 never showed signs of slow I/O.

My understanding is that ZFS can use LOTS of memory, and on systems with 4 GB or more of RAM, it will. Some SATA drivers don’t support 64-bit memory addressing, so any data mapped to memory above the 4 GB mark has to be copied directly rather than transferred with DMA.

The solutions are to limit the amount of Dom0 memory and to tell ZFS to use less memory. Between those two settings, the system runs MUCH better. Some people physically removed memory chips to test this, with mixed results. We were able to use kernel parameters to limit the Dom0 memory to a few gigabytes. The difference in performance is quite amazing.

I used the following command to benchmark drive writes (make sure to cd to the appropriate zfs pool):
time dd if=/dev/zero bs=128k count=25000 of=tmp.dmp

I also have the following ZFS settings (these settings were in place from the start):
compression=on
recordsize=8k

Before Config Changes
Bare Metal Machine (Dom0) – 24 GB RAM total – about 18 GB in Dom0
Approx write time for 3.2 GB file – 19 seconds – 170 MB/s

DomU – CentOS as paravirtualized guest – 4 GB RAM
Approx write time for 3.2 GB file – 14 min 8 sec – 4 MB/s

Yes, 14 minutes in the guest versus about 20 seconds on bare metal. That is quite a spread, and yes, I ran the tests multiple times. That explains why the nightly backup was blowing chunks, and why we switched it to rsync instead of tar/gzip (which was a better solution anyway).

After Config Changes
Bare Metal Machine (Dom0) – 24 GB RAM total – only 2 GB allocated to Dom0 (kernel parameter)
Approx write time for 3.2 GB file – 14 seconds – 234 MB/s

DomU – CentOS as paravirtualized guest – 4 GB RAM
Approx write time for 3.2 GB file – 19 to 28 seconds – 184 to 115 MB/s

The DomU system is running close to the previous native speed. The Dom0 system picked up speed. The difference is dramatic to say the least.

This works great for us, as our Dom0s are only acting as virtual machine hosts and doing nothing else. With 2 GB of RAM, we still had 1 GB free and no swap usage. We did notice reduced performance when setting the Dom0 to 1 GB of RAM, but didn’t see any improvement when setting it to 3 GB. Note that we are adjusting the Dom0 with a kernel parameter, so the machine still has its 24 GB of RAM to use for virtual machines. Also note that you have to reboot the Dom0 for these changes to take effect.

How to set up your Dom0

AHCI Drivers – You may need to check that your system is using AHCI drivers
-> prtconf -v |grep SATA
You should see something like
‘SATA AHCI 1.0 Interface’

Set the Dom0 to use 2 GB of RAM
Note – this has to happen at boot; using the virsh setmaxmem command doesn’t seem to fix performance.
My system boots from /rpool
-> pfexec nano /rpool/boot/grub/menu.lst
Add dom0_mem=2048M to the kernel line
-> kernel$ /boot/$ISADIR/xen.gz dom0_mem=2048M
Some people suggest pinning your Dom0 to the first CPU or two and pinning your VMs to other CPUs. I didn’t; that is up to you. Here is how you would pin Dom0 to CPUs 1 and 2 (cores 1 and 2, not physical CPUs):
-> kernel$ /boot/$ISADIR/xen.gz dom0_mem=2048M dom0_max_vcpus=2 dom0_vcpus_pin=true
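
After rebooting, it is worth confirming that the cap took effect. Assuming the standard Xen command line tools are present on your system, something like the following should show Domain-0 holding roughly 2048 MB:
-> xm list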

Limit ZFS Memory
You want to cap how much memory ZFS uses for its cache. The value below (0x10000000 bytes) limits the ZFS ARC to 256 MB; adjust it to suit your workload.
-> pfexec nano /etc/system
Go to the bottom of the file and add:
-> set zfs:zfs_arc_max = 0x10000000

Prevent Auto-Ballooning
ZFS doesn’t like the amount of system memory changing. In xVM, the default is for Dom0 to start with all system memory; when VMs start, they take some from Dom0, which causes the amount of memory available to Dom0 to change. We set the maximum memory with the kernel parameter above; now we set the minimum so Dom0 will always have 2 GB.
-> pfexec svccfg -s xvm/xend setprop config/dom0-min-mem=2048

Testing Write Times
Make sure to benchmark before and after to verify your results. Also note that I used CentOS as the guest, installed as a paravirtualized guest, not an HVM guest.
-> time dd if=/dev/zero bs=128k count=25000 of=tmp.dmp

Hope this saves you some time.

Ray Pulsipher

Dynamic data layout with vertical tables

April 25th, 2009

Developers today are faced with poorly defined requests and constantly changing requirements. It is not uncommon for a developer to spend countless meetings and design sessions getting the database and object layout just right, only to have the client request just one more field when the first beta application is delivered.

That one more field requires you to change your database structure, add extra code to your object, adjust your database queries, and add additional form elements to your web site or application. This can be very tedious and often introduces bugs.

Oftentimes, these new fields are simply that: new fields. While they do require basic validation and application plumbing to accommodate them, more often than not they do not require extra business logic (e.g., adding a second phone number field for a client list).

With the concept of vertical tables, you can avoid much of this overhead. In addition, vertical tables can help accommodate new features later.

What is a vertical table?
Normally you store data in a database table like so:
Name – Phone Number
Bob – 555-5555
Frank – 333-3333

You have one entry per row with 1 field for each piece of data you want to hold. When you decide you need to hold a work number also, you would add a new field.
Name – Phone Number – Work Phone
Bob – 555-5555 – 111-1111
Frank – 333-3333 – 222-2222

This would require you to adjust your application code and the corresponding database queries. Even if the work number field isn’t displayed on the form (maybe a new part of your application assigns it?), you still run the risk of breaking previously written code due to the change in table structure.

A vertical table changes things around. You now get multiple rows for one person. For example:
Name – Key – Value
bob – name – bob
bob – phone_number – 555-5555
frank – name – frank
frank – phone_number – 333-3333

You add a new row for each value you want to store about bob. The name in this case is the key to knowing whether the data belongs to bob or frank. To add a new “field” in this case, you would add a new row:
bob – work_number – 111-1111
frank – work_number – 222-2222

The field “Key” becomes your field name. The field “Value” becomes your actual data. You can now add as many bits of information as you want about the person without having to change the structure of your table.
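
For reference, here is roughly what the table behind this layout can look like in MySQL. This is only a sketch; the table and column names simply match the examples in this post, and the index choices tie into the indexing advice later on (key is a reserved word in MySQL, hence the backticks):

CREATE TABLE table_people (
    name    VARCHAR(64) NOT NULL,  -- which record (person) the row belongs to
    `key`   VARCHAR(64) NOT NULL,  -- the "field name"
    `value` TEXT,                  -- the actual data
    PRIMARY KEY (name, `key`),     -- one value per key per person; also covers lookups by name
    KEY idx_key (`key`)            -- optional: helps queries that filter on the field name
);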

It changes the way you code
For this to work, you need to change the way you think and code. Often, web programmers will do raw SQL queries and output the results directly to a web page; this layout makes that difficult. You will need some kind of intermediate object to handle loading and saving values from the table, so it is no longer a simple CRUD application. Instead, you create objects that persist themselves to the database rather than working with raw data stored in the database.

Given the table of people above, in PHP we would create an object that holds a hash array of values (in PHP these are called associative arrays). The hash array uses the key field as the array key and the value field as the array value. To load a user object, you give it the name (bob) and tell it to load all values for bob. The object loops through all matching rows in the database and loads them into the array. This way, as new values are stored in the database, the object automatically loads them. You aren’t required to create static properties for each value, just read the array, which means adding new fields requires no change to the middle-tier object definition.

When saving, the same thing occurs: we loop through the array and save each value to the database. In our own code, we also keep track of “changed” values and only save the ones marked as changed, to avoid rewriting information that hasn’t been modified (a sketch of this saving logic appears after the example below). Also keep in mind that in most code you won’t change values very often, so it is usually just a read operation. In any case, this gives us automatic object persistence as long as we put all the values we want to keep in the array.

Further, we store loaded objects in sessions where appropriate so that we don’t have to reload the object on each page view. This means refreshing the current page results in fewer database queries.

Here is an example of the Person class:
class Person {
    // The array to hold our values
    var $values = array();

    function LoadPerson($name) {
        // Load every key/value row for this person in one query
        $name = mysql_real_escape_string($name);
        $SQL = "SELECT * FROM table_people WHERE name='$name'";
        $rs = mysql_query($SQL);
        while ($record = mysql_fetch_assoc($rs)) {
            $this->values[$record["key"]] = $record["value"];
        }
    }

    function GetValue($value_name, $default="") {
        if (isset($this->values[$value_name])) {
            // This value exists, return it
            return $this->values[$value_name];
        }
        // Value does not exist, return the default value
        return $default;
    }
}

This code is super simple, but it has enough features to illustrate the point.

$person = new Person();
$person->LoadPerson("bob");
print_r($person->values);

Run that, then add more values to your database and run that again. It will automatically load your new values.
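
To round out the example, here is a sketch of the setting and saving side described earlier: values are marked as changed when set, and only those are written back. This is only one way to do it; the SetValue/SavePerson names, the extra $changed property, and the REPLACE query (which assumes a unique index on name plus key, as in the schema sketch above) are just for illustration.

// Add to the Person class above, along with a second property: var $changed = array();

function SetValue($value_name, $value) {
    // Store the value and remember that it needs to be written back
    $this->values[$value_name] = $value;
    $this->changed[$value_name] = true;
}

function SavePerson($name) {
    // Only write the values that were marked as changed
    $n = mysql_real_escape_string($name);
    foreach ($this->changed as $key => $dirty) {
        if (!$dirty) continue;
        $k = mysql_real_escape_string($key);
        $v = mysql_real_escape_string($this->values[$key]);
        // REPLACE overwrites an existing row for this name/key pair, or inserts a new one
        $SQL = "REPLACE INTO table_people (name, `key`, `value`) VALUES ('$n', '$k', '$v')";
        mysql_query($SQL);
        $this->changed[$key] = false;
    }
}

Usage then looks like:

$person = new Person();
$person->LoadPerson("bob");
$person->SetValue("work_number", "111-1111");
$person->SavePerson("bob");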

The real change is in how you think about databases. It is easy to think of the database as “THE” data. This mindset is reinforced by the concept of CRUD applications, where forms are just windows onto the database table with validators. Vertical tables require a different perspective on data storage.

In short, this method is a good way to make your objects persistent using a database and build your application based on your objects. You create objects to work with, not “data”.

For example, applications that store data directly in a file only use that file to save and load data as needed, not to run reports against it directly. This method works well if you think of the database as a file; it just so happens that in this case the file is indexed for speed, and you can drop your file format and parsing code in favor of SQL calls.

Automatic Persistence
We used a constructor and destructor to auto-load and auto-save values. Using this method you can store data for most of your structures. If you set up a parent class with basic loading and saving built in, you can have persistent data structures very easily by having your objects extend that class. Wouldn’t it be nice not to have to re-debug your loading and saving code every time you create a new class?
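
Here is a rough sketch of what such a parent class could look like, using PHP 5 constructors and destructors. The class name, property names, and the idea of passing the table name in are placeholders to show the shape; the Load/Save bodies would hold the same logic as LoadPerson and the save sketch above, just using $this->table and $this->id.

// Sketch of a generic persistent base class that subclasses extend
class PersistentObject {
    var $table = "";          // vertical table to read/write, e.g. "table_people"
    var $id = "";             // row key, e.g. "bob"
    var $values = array();
    var $changed = array();

    function __construct($table, $id) {
        $this->table = $table;
        $this->id = $id;
        $this->Load();        // auto-load on creation
    }

    function __destruct() {
        $this->Save();        // auto-save when the object goes away
    }

    function Load() { /* one SELECT on $this->table for $this->id, fill $this->values */ }
    function Save() { /* write the changed values back, as in the save sketch above */ }
}

// The Person class from above could then be reduced to:
class Person extends PersistentObject {
    function __construct($name) {
        parent::__construct("table_people", $name);
    }
}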

Extending your structures
Writing your GUI code against these objects lets you divorce your UI code from the underlying database structure. This means adding new values won’t break your existing code as easily. It also means that as you extend your application, you can hook into the objects and store new data just by adding it to the array.

For example, if you decided to add e-mail alerts to your web site, the alert GUI code could save all of its data in the Person object just by setting new values. Bob could now log in and add e-mail addresses to the alert form. The alert code won’t need its own alert table; just make sure the key names are unique so you don’t overwrite any existing data.
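
As a quick sketch of that idea (the key names and addresses here are made up, and it relies on the SetValue/SavePerson sketch from earlier):

$person = new Person();
$person->LoadPerson("bob");

// The alert feature stores its settings under its own key names
$person->SetValue("alert_email", "bob@example.com");
$person->SetValue("alert_frequency", "daily");
$person->SavePerson("bob");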

It is faster than you think, and wastes less space than you might imagine
First, it can be quite fast, but you need to handle it properly in your code. If you limit your round trips to the database, it can be as fast as grabbing single rows from a normal table. You can do one query per user to load information (see the example above). This works well when you are only loading information for a few users at a time. If you are loading users in bulk, you will want to rethink your loading technique, but even that can be optimized: think one query, plus a loading function where you can pass the object its records rather than having each object make its own query (see the sketch below).
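
Here is one way the bulk-loading idea could be sketched: a single query pulls every row, the rows are grouped by name, and each object is handed its records directly instead of running its own query. The function name and the direct assignment to $person->values are just illustrative.

// Bulk load: one query for everyone, then build objects from the grouped rows
function LoadAllPeople() {
    $rows_by_name = array();
    $rs = mysql_query("SELECT * FROM table_people");
    while ($record = mysql_fetch_assoc($rs)) {
        $rows_by_name[$record["name"]][$record["key"]] = $record["value"];
    }

    $people = array();
    foreach ($rows_by_name as $name => $values) {
        $person = new Person();
        $person->values = $values;   // hand the object its records, no per-object query
        $people[$name] = $person;
    }
    return $people;
}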

As for space, yes, it will take more space than a normal table. But more space is often not an issue if you are working with smaller data sets. Should Google build its search index this way? I wouldn’t recommend it. But for smaller web sites, database storage space generally isn’t your problem. Web sites getting more than 10,000,000 hits a month and holding several hundred thousand records can use this method (yes, we have used this technique in real-world scenarios on sites with real traffic). If your data storage is 200 MB, using this method is still possible, and in the right circumstances it is feasible for much larger data sets.

You have to decide whether you are a minimalist (it has to be the most space-efficient storage method just because) or whether you can spare a few KB per item to save yourself some time.

Just make sure to add proper indexing to the fields. MySQL 5.0 is very sensitive to proper indexing; previous versions did a better job of guessing and still running fast. 5.0 is wicked fast, but you need to be explicit about what is indexed and how. Index all fields except the Value field, and maybe even that one if you find yourself searching on the information stored there.

Advantages of this method
This method takes you away from the simple CRUD/data-binding mentality and forces you to create a useful object model to code against. This is part of the idea of N-tier data access. Too often, though, N-tier data models become simple wrapper classes around CRUD-style applications. In this case, the objects are a must, as simple things can become unwieldy if you try to do raw data queries for everything.

Add new values at any time for any reason. It is nice to be able to just add new values to an object without re-coding the object or the database. This makes adding a new field to a form quite fast, as you just need to hook the UI code to the object via the Get and Set methods. For example, adding a second e-mail address for a user would only require adding the text element and a line to get/set the value; the object would then handle loading and saving the additional value. No changes to the object and no changes to the database structure.

I can create new persistent classes very quickly and have them be solid and stable with minimal effort. This can save many man-hours! You can either copy/paste the get/set/load/save methods and set up a new table using the same format, or write a base class with the core functionality and have new classes extend it.

Disadvantages of this method
This is NOT a good method for the data binders out there. The exception would be those who data bind to objects instead; that can be effective, and we have used this technique in .NET on a few occasions. Most data-binding tools will not bind properly to vertical tables, as they assume one row per item rather than multiple rows per item.

Be careful with this method. Don’t try it right away on a big project. You need to play with it a bit to understand it and to figure out when it is appropriate. If coded and managed properly, it can be an elegant solution. If not, it can be your worst nightmare. I had some coders redesign some code I wrote who didn’t quite grasp what I was doing. Their “enhanced” version ended up harder to use than the original; it lost usability and some existing features in order to gain a few features that would require a senior-level programmer to utilize. In short, it became harder to understand and work with instead of easier.

Storage space requirements are bigger, but not that much bigger. You will have to decide what your speed and space requirements really are. As with any application, test it! Proper coding techniques will almost always result in more performance improvements than picking the “perfect” database layout.

Conclusion
There are many enhancements needed to make this truly useful. We have auto-saving and various helper functions to make the code easier to work with. Ideally, you only run database queries in the load and save functions that read or store the array; the rest of the code is abstracted from the database. A base class is also not a bad idea, so that classes that extend it automatically get the load and save functionality.

Our new toolkit will be using this method to allow us to plug new features into an existing website. Check it out; the programming library will be released for free: http://cmtk.cmagic.biz/. The new website is now live, and new examples and information will be added regularly.

Call Of Duty 4 Web Page Server Manager

March 6th, 2009

This was a fun project. I wanted to be able to manage my cod4 game server without all the rcon commands. Also, my friends should be able to restart the server or add bots even if I am not playing. Lastly, I didn’t want to give up the security of my server to let them do so (e.g., they can’t start the server with an rcon command; that requires logging into the server via remote desktop or some other method requiring admin privileges).

What I ended up with was a web page that can connect to a cod4 server via its UDP port and run rcon commands. It works pretty well and uses the same method that other scripts on the web use. I wrote it using C# on .NET 3.5 with AJAX, and I think it came out pretty well.

There are some limitations. The player list doesn’t parse perfectly (UDP packets come in out of order) and there are lots of things to add. But as a tool, it is quite usable and works well. I am releasing it for free, so help yourself and best of luck to you. Instructions are on the linked page.

http://cmagic.biz/products/cod4_server_manager/

