
Show Posts


Topics - Dakusan

421
Posts / Zelda Treasure Flaws
« on: September 28, 2009, 05:31:08 am »
Original post for Zelda Treasure Flaws can be found at https://www.castledragmire.com/Posts/Zelda_Treasure_Flaws.
Originally posted on: 05/06/08

I had meant to write this post back when I beat “Zelda: Twilight Princess”, a few days after it and the Nintendo Wii came out in 2006, but I never got around to it, and writing about a game that came out so long ago seemed rather antiquated.  The initiative to write this post popped up again, though, as I just finished replaying “Zelda: Ocarina of Time” (N64).

I have been a really big Zelda fan for a very long time, and have played most of the series.  I got to a GameStop ~8 hours, IIRC, before they started taking preorders for the Wii, to make sure I could play Twilight Princess as soon as it came out, as I was very anxious to play it. It was a good thing I did, too, because when the Wii actually came out, they were next to impossible to acquire. I knew of many people having to wait in lines well over 15 hours to get one soon after the release, and they were still rarities well over a year later.

While I really enjoyed Twilight Princess, I was very frustrated by a rupee and treasure problem.  “Zelda” (NES) and “Link to the Past” (SNES) had it right.  Whenever you found a secret in those games, it was something really worth finding: a heart piece (which increased your life meter), or often even a new item. Rupees (in-game money) were hard earned through slaying enemies, only rarely given in bulk as prizes, and you almost always needed more. As I played through Twilight Princess, I was very frustrated that almost every secret I found, while I was hoping for something worthwhile like a heart piece, was just a mass of rupees. There were at least 50 chests I knew of by the end of the game filled with rupees that I couldn’t acquire because I was almost always maxed out on the amount I could carry. What’s even worse is that the game provided you a means to pretty much directly pinpoint where all the heart pieces were. These problems pretty much ruined the enjoyment of the search for secret treasures in the game. You could easily be pointed directly to where all the hearts were, new game items were only acquirable as primary dungeon treasures, and the plethora of rupees was next to worthless.

So, as I was replaying Ocarina of Time, I realized how unnecessary rupees were in that game too.  There are really only 2 places in the whole game where you need rupees to buy important items, one of which is during your very first task within the first few minutes of the game. The only other use for rupees is a side quest to buy magic beans, which takes up a small chunk of your pocket change through the game; besides that, there is no point to the money system, as you never really need it for anything.  What’s even more of a slap in the face is that one of the primary side quests in the game just rewards you with larger coin purses to carry more rupees, which, again, you will never even need.

While these games are extremely fun, this game design flaw just irks me. Things like this will never stop me from playing new Zelda games, however, or even replaying the old ones from time to time, especially my by-far favorite, Link to the Past, as they are all excellent works.  I would even call them pieces of art. Miyamoto forever :-).


422
Posts / Secure way of proving IP ownership
« on: September 28, 2009, 05:31:07 am »

So I was thinking of a new project that might be fun, useful, and possibly even turn a little profit, but I was talked out of it by a friend due to the complexity of the prospect beyond the programming part. The concept isn’t exactly new by a long shot, but my idea for the implementation is, at least I would like to think, novel.

For a very long time, it has been important to be able to prove, without a doubt, that you have the oldest copy of some IP, establishing that you are the original creator.  The usual approach to this is storing copies of the IP at a secure location, with the storage time recorded.  This is, I am told, very often used in the film industry, as well as many others.

The main downside to this for the subscriber, IMO, is having their IP, which may be confidential, stored by a third party, and entrusting their secrets to an outsider’s security. Of course, if these services are done properly and are ISO certified for non-breachable secure storage, this shouldn’t be a problem as they are probably more secure than anything the user has themselves.  One would like to think, though, that entrusting your IP to no one but yourself is the most secure method.

The out-of-house storage method may also require that there be records accessible by others telling that you stored your IP elsewhere, and that it exists, which you may not want known either. This is not always a problem though because some places allow completely anonymous storage.

A large downside for the provider is having to supply and maintain the medium for the secure storage, whether it be vaults for physical property, or hard drives for virtual property.


My solution to this problem, for virtual property anyways, is to not have the provider permanently store the user’s data at all, but provide a means by which the provider can authenticate a set of the user’s data as being unchanged since a certain date. This would be accomplished by hashing the user’s data against a random salt.  The salt would be determined by the date and would only be known by the provider.


This would work as follows:
  • Every day, the server would create a completely random salt string of a fixed length, probably 128 bits. This random salt would be the only thing the server would need to remember and keep secret.  This process could also be done beforehand for many days or years.
  • As the user uploaded the data through a secure connection, the server would process it in chunks, probably 1MB at a time, through the hash function.
  • The hash function, probably a standard one like MD5, would be slightly modified to multiply the current hash on each block iteration against the daily random salt. The salt would also be updated per upload by a known variable, like multiplying the salt against the upload size, which would be known beforehand, or the exact time of upload.
  • A signature would be calculated, using a public-key certificate, over a combined string of the time of upload and the hash.
  • The user would be returned a confirmation string, which they would need to keep, that contained the time of upload and the signature.

Whenever the user wanted to verify their data, they would just have to resend their data and the confirmation string, and the server would report whether or not it is valid.
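
To make the scheme concrete, here is a minimal sketch of it in C. To keep it self-contained, a toy mixing function stands in for a real cryptographic hash like MD5, and a trivial function stands in for the real public-key signature, so every function and constant in it is illustrative only:

//Minimal sketch of the salted-hash timestamping scheme described above
//The hash and signature functions are toy stand-ins for MD5 and a real public-key signature
#include <stdio.h>

typedef unsigned int UINT32;

UINT32 DailySalt(UINT32 DayNumber) //The server's secret per-day salt (would really be 128 random bits, kept secret)
{
   return DayNumber*2654435761u ^ 0xDEADBEEFu;
}

UINT32 SaltedHash(const unsigned char* Data, UINT32 Length, UINT32 Salt) //Hash the data, folding the salt into every iteration
{
   UINT32 Hash=0x811C9DC5u, i;
   for(i=0;i<Length;i++)
   {
      Hash=(Hash^Data[i])*16777619u; //FNV-style mixing of the data itself
      Hash*=(Salt|1); //Multiply the running hash against the salt on each iteration
   }
   return Hash;
}

UINT32 Sign(UINT32 UploadTime, UINT32 Hash) //Stand-in for signing "time of upload + hash" with a private key
{
   return (UploadTime*31+Hash)^0x5A17ED00u;
}

int main()
{
   const unsigned char UserData[]="The user's confidential IP";
   const UINT32 DayNumber=14036, UploadTime=1212693600; //Illustrative values

   //Per-upload salt: the daily salt varied by a value known beforehand (here, the upload size)
   UINT32 Salt=DailySalt(DayNumber)*(UINT32)sizeof(UserData);

   //Upload: the server hashes the data and returns a confirmation (time of upload + signature)
   UINT32 Signature=Sign(UploadTime, SaltedHash(UserData, sizeof(UserData), Salt));
   printf("Confirmation: time=%u signature=%08X\n", UploadTime, Signature);

   //Verification: the user resends the data and confirmation, and the server just recomputes
   UINT32 Check=Sign(UploadTime, SaltedHash(UserData, sizeof(UserData), Salt));
   printf("Data is %s\n", Check==Signature ? "valid" : "NOT valid");
   return 0;
}

The important property is that the server permanently stores nothing per upload; as long as the daily salts stay secret, no one can produce a valid confirmation for data that was not actually uploaded on the day it claims.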

I was thinking the service would be free for maybe 10MB a day.  Different account tiers with appropriate fees would be available that would give the user 1 month of access and an amount of upload bandwidth credits, which would roll over each month.  Unlimited verifications would also be allowed for account holders, though these uploads would still be applied towards the user’s credits.  Verifications without an account would incur a nominal charge.

The only thing keeping me from pursuing this idea is that, for it to be truly worth it to the end users, the processing site and salt tables would have to be secured and ISO certified as such, which I believe would be a lot more cost and trouble than the initial return would justify, and I don’t have the money to invest in it right now.


I do need to find one of these normal storage services for myself soon though. I’ll post again about this when I do.



[edit on 6/15/08 @ 5:04pm]
Well, this isn’t exactly the same thing, but a lot like it.
http://www.e-timestamp.com

423
Posts / FoxPro Table Memo Corruption
« on: September 28, 2009, 05:31:06 am »
Original post for FoxPro Table Memo Corruption can be found at https://www.castledragmire.com/Posts/FoxPro_Table_Memo_Corruption.
Originally posted on: 06/13/08

My father’s optometric practice has been using an old DOS database called “Eyecare” since (I believe) the early 80’s.  For many years, he has been programming a new, very customized database from scratch in Microsoft Access that is backwards compatible with “Eyecare”, which uses a minor variant of FoxPro databases.  I’ve been helping him with minor things on it for a number of years, and more recently I’ve been giving a lot more help in getting it secured and migrated from Microsoft Access databases (.mdb) into MySQL.

A recent problem cropped up in that one of the primary tables started crashing Microsoft Access when it was opened (through a FoxPro ODBC driver). Through some tinkering, he discovered that the memo file (.fpt) for the table was corrupted, as trying to view any memo fields is what crashed Access.  He asked me to see if I could help in recovering the file, which fortunately I can do at my leisure, as he keeps paper backups of everything for just such circumstances. He keeps daily backups of everything too… but for some reason that’s not an option.


I went about trying to recover it through the easiest means first, namely, trying to open and export the database through FoxPro, which only recovered 187 of the ~9000 memo records.  Next, I tried finding a utility online that did the job, and the first one I found that I thought should work was called “FoxFix”, but it failed miserably.  There are a number of other Shareware utilities I could try, but I decided to just see how hard it would be to fix myself first.


I opened the memo file up in a HEX editor, and after some very quick perusing and calculations, it was quite easy to determine the format (it is also reflected in the structures of the code below):
  • A 512-byte file header, which, among other things, holds the record block size at offset 6.
  • Records, each starting on a block boundary (32-byte blocks here), consisting of a 4-byte record type (1=memo), a 4-byte data length (both stored in the opposite byte order of x86, hence the BSWAP in the code), and then the data itself, padded with “\0” to the next block boundary.

So I continued on the path of seeing what I could do to fix the file.
  • First, I had it jump to the header of each record and just get the record data length, and I very quickly found multiple invalid record lengths.
  • Next, I had it attempt to fix each of these by determining the real length of the memo, searching for the first null terminator (“\0”) character, but I quickly discovered an oddity.  There are weird sections in many of the memo fields in the format BYTE{0,0,0,1,0,0,0,1,x}, which is 2 DWORDs equal to 1 (in the file’s stored byte order), and a final byte character (usually 0).
  • I added to the algorithm to include these as part of a memo record, and many more original memo lengths then agreed with my calculated memo lengths.
  • The final thing I did was determine how many invalid (non-keyboard) characters there were in the memo data fields.  There were ~3500 0x8D characters, which were almost always followed by 0xA, so I assume these were supposed to be line breaks (Windows line breaks are denoted by [0xD/carriage return/\r],[0xA/line feed/\n]). There were only 5 other invalid characters, so I just changed these to question marks ‘?’.

Unfortunately, Microsoft Access still crashed when I tried to access the comments fields, so I will next try to just recover the data, tie it to its primary keys (which I will need to determine through the table file [.dbf]), and then rebuild the table. I should be making another post when I get around to doing this.


The following code which “fixes” the table’s memo file took about 2 hours to code up.

//Usually included in windows.h
typedef unsigned long DWORD;
typedef unsigned char BYTE;

//Includes
#include <iostream.h> //cout
#include <stdio.h> //file io
#include <conio.h> //getch
#include <ctype.h> //isprint

//Memo file structure
#pragma warning(disable: 4200) //Remove zero-sized array warning
const int MemoFileHeadLength=512;
const int RecordBlockLength=32; //This is actually found in the header at (WORD*)(Start+6)
struct MemoRecord //Full structure must be padded at end with \0 to RecordBlockLength
{
   DWORD Type; //Type in little endian, 1=Memo
   DWORD Length; //Length in little endian
   BYTE Data[0];
};
#pragma warning(default: 4200)

//Input and output files
const char *InFile="EXAM.Fpt.old", *OutFile="EXAM.Fpt";

//Assembly functions
__forceinline DWORD BSWAP(DWORD n) //Swaps endianness
{
   _asm mov eax,n
   _asm bswap eax
   _asm mov n, eax
   return n;
}

//Main function
void main()
{
   //Read in file
   const DWORD FileSize=6966592; //This should actually be found when the file is opened...
   FILE* MyFile=fopen(InFile, "rb");
   BYTE *MyData=new BYTE[FileSize];
   fread(MyData, FileSize, 1, MyFile);
   fclose(MyFile);

   //Start checking file integrity
   DWORD FilePosition=MemoFileHeadLength; //Where we currently are in the file
   DWORD RecordNum=0, BadRecords=0, BadBreaks=0, BadChars=0; //Data Counters
   const DWORD OneInLE=0x01000000; //One in little endian
   while(FilePosition<FileSize) //Loop until EOF
   {
      FilePosition+=sizeof(((MemoRecord*)NULL)->Type); //Advance past record type (1=memo)
      DWORD CurRecordLength=BSWAP(*(DWORD*)(MyData+FilePosition)); //Pull in little endian record size
      cout << "Record #" << RecordNum++ << " reports " << CurRecordLength << " characters long. (Starts at offset " << FilePosition << ")" << endl; //Output record information

      //Determine actual record length
      FilePosition+=sizeof(((MemoRecord*)NULL)->Length); //Advance past record length
      DWORD RealRecordLength=0; //Actual record length
      while(true)
      {
         for(;MyData[FilePosition+RealRecordLength]!=0 && FilePosition+RealRecordLength<FileSize;RealRecordLength++) //Loop until \0 is encountered
         {
#if 1 //**Check for valid characters might not be needed
            if(!isprint(MyData[FilePosition+RealRecordLength])) //Makes sure all characters are valid
               if(MyData[FilePosition+RealRecordLength]==0x8D) //**0x8D maybe should be in ValidCharacters string? - If 0x8D is encountered, replace with 0xD
               {
                  MyData[FilePosition+RealRecordLength]=0x0D;
                  BadBreaks++;
               }
               else //Otherwise, replace with a "?"
               {
                  MyData[FilePosition+RealRecordLength]='?';
                  BadChars++;
               }
#endif
         }

         //Check for inner record memo - I'm not really sure why these are here as they don't really fit into the proper memo record format.... Format is DWORD(1), DWORD(1), BYTE(0)
         if(((MemoRecord*)(MyData+FilePosition+RealRecordLength))->Type==OneInLE && ((MemoRecord*)(MyData+FilePosition+RealRecordLength))->Length==OneInLE /*&& ((MemoRecord*)(MyData+FilePosition+RealRecordLength))->Data[0]==0*/) //**The last byte seems to be able to be anything, so I removed its check
         { //If inner record memo, current memo must continue
            ((MemoRecord*)(MyData+FilePosition+RealRecordLength))->Data[0]=0; //**This might need to be taken out - Force last byte back to 0
            RealRecordLength+=sizeof(MemoRecord)+1;
         }
         else //Otherwise, current memo is finished
            break;
      }
      if(RealRecordLength!=CurRecordLength) //If given length != found length
      {
         //Tell the user a bad record was found
         cout << "   Real Length=" << RealRecordLength << endl;
         CurRecordLength=RealRecordLength;
         BadRecords++;
         //getch();

         //Update little endian bad record length
         ((MemoRecord*)(MyData+FilePosition-sizeof(MemoRecord)))->Length=BSWAP(RealRecordLength);
      }

      //Move to next record - Each record, including RecordLength is padded to RecordBlockLength
      DWORD RealRecordSize=sizeof(MemoRecord)+CurRecordLength;
      FilePosition+=CurRecordLength+(RealRecordSize%RecordBlockLength==0 ? 0 : RecordBlockLength-RealRecordSize%RecordBlockLength);
   }

   //Tell the user file statistics
   cout << "Total bad records=" << BadRecords << endl << "Total bad breaks=" << BadBreaks << endl << "Total bad chars=" << BadChars << endl;

   //Output fixed data to new file
   MyFile=fopen(OutFile, "wb");
   fwrite(MyData, FileSize, 1, MyFile);
   fclose(MyFile);

   //Cleanup and wait for user keystroke to end
   delete[] MyData;
   getch();
}

424
Posts / Inlining Executable Resources
« on: September 28, 2009, 05:31:05 am »
Original post for Inlining Executable Resources can be found at https://www.castledragmire.com/Posts/Inlining_Executable_Resources.
Originally posted on: 06/07/08

I am somewhat obsessive about file cleanliness, and like to have everything I do well organized with any superfluous files removed.  This especially translates into my source code, and even more so for released source code.

Before I zip up the source code for any project, I always remove the extraneous workspace compilation files.  These usually include:

  • C/C++: Debug & Release directories, *.ncb, *.plg, *.opt, and *.aps
  • VB: *.vbw
  • .NET: *.suo, *.vbproj.user

Unfortunately, a new offender surfaced in the form of the Hyrulean Productions icon and signature file for about pages. I did not want to have every source release include those 2 extra files, so I did research into inlining them in the resource script (.rc) file.  Resources are just data directly compiled into an executable, and the resource script tells the compiler all of these resources and how to compile them in.  All my C projects include a resource script for at least the file version, author information, and Hyrulean Productions icon. Anyways, this turned out to be way more of a pain in the butt than intended.


There are 2 ways to load “raw data” (not a standard format like an icon, bitmap, string table, version information, etc) into a resource script.  The first way is through loading an external file:
RESOURCEID   RESOURCETYPE   DISCARDABLE   "ResourceFileName"
for example:
DAKSIG   SIG   DISCARDABLE   "Dakusan.sig"
RESOURCEID and RESOURCETYPE are arbitrary and user defined; note that they should usually be in all caps, as the compilers seem to be picky about case.

The second way is through inlining the data:
RESOURCEID   RESOURCETYPE
BEGIN
   DATA
END
for example:
DakSig   Sig
BEGIN
   0x32DA,0x2ACF,0x0306,...
END
Getting the data in the right format for the resource script is a relatively simple task.
  • First, acquire the data as a plain hex dump. I suggest WinHex for this job.
    On a side note, I have been using WinHex for ages and highly recommend it.  It’s one of the most well-built and fully-featured application suites I know of.
  • Lastly, convert the straight hex data (“DA32CF2A0603...”) into an array of proper-endian 16-bit hex values (“0x32DA,0x2ACF,0x0306...”). This can be done with a global replace regular expression of “(..)(..)” to “0x$2$1,”. I recommend EditPad Pro for this kind of work, another of my favorite pieces of software. As a matter of fact, I am writing this post right now in it :-).
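
These two steps could also be automated. Here is a rough sketch in C of the same conversion (the “Dakusan.sig”/“DakSig”/“Sig” names just follow the earlier example, and a real version would want better error handling):

//Sketch: dump a file as an inline resource-script data block, doing the same
//byte-pair swap as the regular expression above ("DA32CF2A..." -> "0x32DA,0x2ACF,...")
#include <stdio.h>

int main(int argc, char** argv)
{
   const char* FileName=(argc>1 ? argv[1] : "Dakusan.sig"); //Input file
   FILE* In=fopen(FileName, "rb");
   int Lo, Hi; //The two bytes of the current WORD
   unsigned int Count=0; //Number of WORDs written so far
   if(!In)
   {
      printf("Cannot open %s\n", FileName);
      return 1;
   }

   printf("DakSig   Sig\nBEGIN\n");
   while((Lo=getc(In))!=EOF) //Each output WORD holds two bytes, low byte first (little endian)
   {
      Hi=getc(In);
      printf("%s0x%02X%02X", Count%16 ? "," : (Count ? ",\n   " : "   "), (Hi==EOF ? 0 : Hi), Lo); //Odd-length files are padded with a 0 byte
      Count++; //16 values per line keeps us well under the resource compiler's line-length limit
   }
   printf("\nEND\n");
   fclose(In);
   return 0;
}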

Here is where the real caveats and problems start falling into place. First, I noticed the resource data was corrupt for a few bytes at a certain location.  It turned out to be Visual Studio wanting line lengths in the resource file to be less than ~4175 characters, so I just added a line break at that point.

This idea worked great for the about page signature, which needed to be raw data anyways, but encoding the icon this way turned out to be impossible :-(. Visual Studio apparently requires external files be loaded if you want to use a pre-defined binary resource type (ICON, BITMAP, etc).  The simple solution would be to inline the icon as a user-defined raw data type, but unfortunately, the Win32 icon-loading API functions (LoadIcon, CreateIconFromResource, LoadImage, etc) only seemed to work with properly defined ICONs.  I believe the problem here is that when the compiler loads the icon to include in the executable, it reformats it somewhat, so I would need to know this format.  Again, unfortunately, the Win32 APIs failed me: FindResource/FindResourceEx wouldn’t let me load the data for ICON types for direct copying (or reverse engineering) :-(.  At this point, it wouldn’t be worth my time to try to get the proper format just to inline my Hyrulean Productions icon into resource scripts. I may come back to it later if I’m ever really bored.


This unfortunately brings back a lot of bad old memories regarding Win32 APIs.  A lot of the Windows system is really outdated, not nearly robust enough, or just way too obfuscated, and it has caused, and still causes, me innumerable migraines trying to get things working with their system.

As an example, I just added the first about page to a C project, and getting fonts working on the form was not only a multi-hour-long knock-down drag-out fight due to multiple issues, but I also ended up having to jury-rig the final solution in exasperation due to time constraints. I wanted the C about pages to match the VB ones exactly, but font size numbers just wouldn’t conform between the VB GUI designer and Windows GDI (the Windows graphics API), so I just put in arbitrary font size numbers that matched visually instead of trying to find the right conversion process, as the documented font size conversion process was not yielding proper results.  This is the main reason VB (and maybe .NET) is far superior in my book when dealing with GUIs (for ease of use at least, not necessarily ability and power).  I know there are libraries out there that supposedly solve this problem, but I have not yet found one that I am completely happy with, which is why I had started my own fully-fledged cross-operating-system GUI library a ways back, but it won’t be completed for a long time.


425
Posts / Online credit card misinformation
« on: September 28, 2009, 05:31:04 am »

I was just doing my accounting and I noticed I had 3 double-charges on my Capital One credit card that all happened within a 2 day period.  I found this to be very odd since I have never been double-charged on any of my credit cards since I started using them 10 years ago when I was 14.

So I went ahead and submitted 2 charge disputes with Capital One, and a third with the other company I saw double-charged.  I then finished my accounting and noticed that the balance showing on my Capital One account did not include those 3 charges. I validated my suspicions by calling up their customer relations department (getting a lady in India) and confirming that the charges only show up once on my account.

I then sent emails to rescind my previous queries about having the double-charges refunded, and also noted in the email to Capital One that their web system (or possibly statement system) has an error and needs to be fixed.  The double-charges actually weren’t showing up on the same statements.  They showed up once (for May 16th and 17th) on my last month’s statement, and then again (May 17th and 19th) on my current month’s statement. Go figure.


[Edit on 6/13/08] A few days ago, after an annoying downtime on the Capital One credit card site, I noticed they added a new feature that now shows your latest charges within a certain period of days (15, 30, etc) instead of just the current billing cycle.  So I’m pretty sure the above problem was due to them implementing this new system without warning users or having any indication of the system change in the interface. I do know how annoying change control is, and the problems that come along with implementing new features on websites which may temporarily confuse users, but I’d expect better from a multinational corporation like this. Then again, this isn’t the first time this kind of thing has happened on their website, so I shouldn’t be surprised.

426
Posts / Project About Pages
« on: September 28, 2009, 05:31:03 am »
Original post for Project About Pages can be found at https://www.castledragmire.com/Posts/Project_About_Pages.
Originally posted on: 05/26/08

About Window Concept

I’ve been thinking for a while that I need to add “about windows” to the executables of all my applications with GUIs, so I first made a test design [left, psd file attached].

Unfortunately, this requires at least 25KB for the background alone, and this is larger than many of my project executables themselves. This is a problem for me, as I like keeping executables small and simple.

I therefore decided to scratch the background and just go with a normal “about window” and my signature in a spiffy font [BlizzardD]: [PNG signature image; white background added by web browser for visibility]
The above PNG signature is only 1.66KB, so “yay”, right?  Wrong :-(. It quickly occurred to me that XP does not natively support PNG.

My next thought was “what about a GIF?” (GIF is the predecessor to PNG, also lossless): [GIF signature image, 1.82KB]
I remembered that GIF files worked in VB, so I thought the native Windows API might support them too without adding in extra DLLs, but alas, I was wrong. This at least partially solved the problem for me in Visual Basic, but not fully, as GIF does not support translucency, only 1 color of transparency, so the picture would look horribly aliased (pixelated).

The final solution I decided on was having a small translucency mask and alpha-blending it and the primary signature color (RGB(6,121,6)) onto the “about window” background.
Since alpha-blending/translucency is an 8-bit value, a gray-scale (also 8 bits per pixel) image is a perfect translucency-mask format for VB: [GIF signature mask image, 1.82KB]
You may note that this GIF is the exact same size as the previous GIF, which makes sense as it is essentially the exact same picture, just with swapped color palettes.

The final hurdle is how to import the picture into C with as little space wasted as possible.  The solution to this is to create an easily decompressible alpha mask (alpha means translucency).
I started with the bitmap mask: [BMP signature mask image, 25.6KB]
From there, I figured there would be 2 easy formats for compression that would take very little code to decompress:
  • Number of Transparent Pixels, Number of Foreground Pixels in a Row, List of Foreground Pixel Masks, REPEAT... (This is a form of “Run-length encoding”)
  • Start the whole image off as transparent, and then list each group of foreground pixels with: X Start Coordinate, Y Start Coordinate, Number of Pixels in a Row, List of Foreground Pixel Masks
It also helped that there were only 16 different alpha-masks, not including the fully transparent mask, so each alpha-mask could fit within half a byte (4 bits).  I only implemented the first option, because I’m pretty sure the second would be larger: an x/y location takes more bits than a transparent run-length number.

Other variants could be used too, like counting the background as a normal mask index and just doing straight run-length encoding with indexes, but I knew this would make the file much larger for 2 reasons: it would add a 17th alpha-mask, which would push index sizes up to 5 bits, and background run lengths are much longer (6 bits in this case) than non-background runs (only 3 bits in this case), so all runs would need the longer length field. (For example, with the final sizes of 6/3/4 bits, a run of 40 transparent pixels followed by 5 translucent pixels costs 6+3+5×4=29 bits.) Anyways, it ended up creating a 1,652 byte file :-).


This could also very easily be edited to input/output 8-bit indexed bitmaps, or even full-color bitmaps (with a max of 256 colors, or as many as you want with a few more code modifications). If one wanted to use this for normal pictures with a solid background instead of an alpha mask, just know that “Transparent” means “Background” and “Translucent” means “Non-Background” in the code.

GIF and PNG file formats actually use similar techniques, but including the code for their decoders would cause a lot more code bloat than I wanted, especially since they [theoretically] include many more compression techniques than just run-length encoding.  Programming for specific cases will [theoretically] always be smaller and faster than programming for general cases.  On a side note, from past research I’ve done on the JPEG format, along with programming my NES Emulator, Hynes, they [JPEG & NES] share the same main graphical compression technique [grouping colors into blocks and only recording color variations].


The following is the code to create the compressed alpha-mask stream: [Direct link to C file with all of the following code blocks]

//** Double stars denotes changes for custom circumstance [The About Window Mask]
#include <windows.h>
#include <stdio.h>
#include <conio.h>

//Our encoding functions
int ErrorOut(char* Error, FILE* HandleToClose); //If an error occurs, output
UINT Encode(UCHAR* Input, UCHAR* Output, UINT Width, UINT Height); //Encoding process
UCHAR NumBitsRequired(UINT Num); //Tests how many bits are required to represent a number
void WriteToBits(UCHAR* StartPointer, UINT BitStart, UINT Value); //Write Value to Bit# BitStart after StartPointer - Assumes more than 8 bits are never written

//Program constants
const UCHAR BytesPerPixel=3, TranspMask=255; //24 bits per pixel, and white = transparent background color

//Encoding file header
typedef struct
{
   USHORT DataSize; //Data size in bits - **Should be UINT
   UCHAR Width, Height; //**Should be USHORTs
   UCHAR TranspSize, TranslSize; //Largest number of bits required for a run length for Transp[arent] and Transl[ucent]
   UCHAR NumIndexes, Indexes[0]; //Number and list of indexes
} EncodedFileHeader;

int main()
{
   UCHAR *InputBuffer, *OutputBuffer; //Where we will hold our input and output data
   FILE *File; //Handle to current input or output file
   UINT FileSize; //Holds input and output file sizes

   //The bitmap headers tell us about its contents
   BITMAPFILEHEADER BitmapFileHead;
   BITMAPINFOHEADER BitmapHead;

   //Read in bitmap header and confirm file type
   File=fopen("AboutWindow-Mask.bmp", "rb"); //Normally you'd read in the filename from passed arguments (argv)
   if(!File) //Confirm file open
      return ErrorOut("Cannot open file for reading", NULL);
   fread(&BitmapFileHead, sizeof(BITMAPFILEHEADER), 1, File);
   if(BitmapFileHead.bfType!=*(WORD*)"BM" || BitmapFileHead.bfReserved1 || BitmapFileHead.bfReserved2) //Confirm we are opening a bitmap
      return ErrorOut("Not a bitmap", File);

   //Read in the rest of the data
   fread(&BitmapHead, sizeof(BITMAPINFOHEADER), 1, File);
   if(BitmapHead.biPlanes!=1 || BitmapHead.biBitCount!=24 || BitmapHead.biCompression!=BI_RGB) //Confirm bitmap type - this code would probably have been simpler if I did an 8 bit indexed file instead... oh well, NBD.  **It has also been programmed for easy transition to 8 bit indexed files via the "BytesPerPixel" constant.
      return ErrorOut("Bitmap must be in 24 bit RGB format", File);
   FileSize=BitmapFileHead.bfSize-sizeof(BITMAPINFOHEADER)-sizeof(BITMAPFILEHEADER); //Size of the data portion
   InputBuffer=malloc(FileSize);
   fread(InputBuffer, FileSize, 1, File);
   fclose(File);

   //Run Encode
   OutputBuffer=malloc(FileSize); //We should only ever need at most FileSize space for output (output should always be smaller)
   memset(OutputBuffer, 0, FileSize); //Needs to be zeroed out due to how writing of data file is non sequential
   FileSize=Encode(InputBuffer, OutputBuffer, BitmapHead.biWidth, BitmapHead.biHeight); //Encode the file and get the output size

   //Write encoded data out
   File=fopen("Output.msk", "wb");
   fwrite(OutputBuffer, FileSize, 1, File);
   fclose(File);
   printf("File %d written with %d bytes\n", 1, FileSize);

   //Free up memory and wait for user input
   free(InputBuffer);
   free(OutputBuffer);
   getch(); //Pause for user input
   return 0;
}

int ErrorOut(char* Error, FILE* HandleToClose) //If an error occurs, output
{
   if(HandleToClose)
      fclose(HandleToClose);
   printf("%s\n", Error);
   getch(); //Pause for user input
   return 1;
}

UINT Encode(UCHAR* Input, UCHAR* Output, UINT Width, UINT Height) //Encoding process
{
   UCHAR Indexes[256], NumIndexes, IndexSize, RowPad; //The index re-mappings, number of indexes, number of bits an index takes in output data, padding at input row ends for windows bitmaps
   USHORT TranspSize, TranslSize; //Largest number of bits required for a run length for Transp[arent] (zero) and Transl[ucent] (non zero) - should be UCHAR's, but these are used as explained under "CurTranspLen" below
   UINT BitSize, x, y, ByteOn, NumPixels; //Current output size in bits, x/y coordinate counters, current byte location in Input, number of pixels in mask

   //Calculate some stuff
   NumPixels=Width*Height; //Number of pixels in mask
   RowPad=4-(Width*BytesPerPixel%4); //Account for windows DWORD row padding - see declaration comment
   RowPad=(RowPad==4 ? 0 : RowPad);

   { //Do a first pass to find number of different mask values, run lengths, and their encoded values
      const UCHAR UnusedIndex=255; //In our index list, unused indexes are marked with this constant
      USHORT CurTranspLen, CurTranslLen; //Keep track of the lengths of the current transparent & translucent runs - TranspSize and TranslSize are temporarily used to hold the maximum run lengths
      //Zero out all index references and counters
      memset(Indexes, UnusedIndex, 256);
      NumIndexes=0;
      TranspSize=TranslSize=CurTranspLen=CurTranslLen=0;
      //Start gathering data
      for(y=ByteOn=0;y<Height;y++) //Column
      {
         for(x=0;x<Width;x++,ByteOn+=BytesPerPixel) //Row
         {
            UCHAR CurMask=Input[ByteOn]; //Current alpha mask
            if(CurMask!=TranspMask) //Translucent value?
            {
               //Determine if index has been used yet
               if(Indexes[CurMask]==UnusedIndex) //We only need to check 1 byte per pixel as they are all the same for gray-scale **This would need to change if using non 24-bit or non gray-scale
               {
                  ((EncodedFileHeader*)Output)->Indexes[NumIndexes]=CurMask; //Save mask number in the index header
                  Indexes[CurMask]=NumIndexes++; //Save index number to the mask
               }

               //Length of current transparent run
               TranspSize=(CurTranspLen>TranspSize ? CurTranspLen : TranspSize); //Max(CurTranspLen, TranspSize)
               CurTranspLen=0;

               //Length of current translucent run
               CurTranslLen++;
            }
            else //Transparent value?
            {
               //Length of current translucent run
               TranslSize=(CurTranslLen>TranslSize ? CurTranslLen : TranslSize);  //Max(CurTranslLen, TranslSize)
               CurTranslLen=0;

               //Length of current transparent run
               CurTranspLen++;
            }
         }

         ByteOn+=RowPad; //Account for windows DWORD row padding
      }
      //Determine number of required bits per value
      printf("Number of Indexes: %d\nLongest Transparent Run: %d\nLongest Translucent Run: %d\n", NumIndexes,
         TranspSize=CurTranspLen>TranspSize ? CurTranspLen : TranspSize, //Max(CurTranspLen, TranspSize)
         TranslSize=CurTranslLen>TranslSize ? CurTranslLen : TranslSize  //Max(CurTranslLen, TranslSize)
         );
      IndexSize=NumBitsRequired(NumIndexes);
      TranspSize=NumBitsRequired(TranspSize); //**This is currently overwritten a few lines down
      TranslSize=NumBitsRequired(TranslSize); //**This is currently overwritten a few lines down
      printf("Bit Lengths of - Indexes, Trasparent Run Length, Translucent Run Length: %d, %d, %d\n", IndexSize, TranspSize, TranslSize);
   }

   //**Modify run sizes (custom) - this function could be run multiple times with different TranspSize and TranslSize until the best values are found - the best values would always be a weighted average
   TranspSize=6;
   TranslSize=3;

   //Start processing data
   BitSize=(sizeof(EncodedFileHeader)+NumIndexes)*8; //Skip the file+bitmap headers and measure in bits
   x=ByteOn=0;
   do
   {
      //Transparent run
      UINT CurRun=0;
      while(Input[ByteOn]==TranspMask && x<NumPixels && CurRun<(UINT)(1<<TranspSize)-1) //Final 2 checks are for EOF and capping run size to max bit length
      {
         x++;
         CurRun++;
         ByteOn+=BytesPerPixel;
         if(x%Width==0) //Account for windows DWORD row padding
            ByteOn+=RowPad;
      }
      WriteToBits(Output, BitSize, CurRun);
      BitSize+=TranspSize;

      //Translucent run
      CurRun=0;
      BitSize+=TranslSize; //Prepare to start writing masks first
      while(x<NumPixels && Input[ByteOn]!=TranspMask && CurRun<(UINT)(1<<TranslSize)-1) //Final 2 checks are for EOF and and capping run size to max bit length
      {
         WriteToBits(Output, BitSize+CurRun*IndexSize, Indexes[Input[ByteOn]]);
         x++;
         CurRun++;
         ByteOn+=BytesPerPixel;
         if(x%Width==0) //Account for windows DWORD row padding
            ByteOn+=RowPad;
      }
      WriteToBits(Output, BitSize-TranslSize, CurRun); //Write the mask before the indexes
      BitSize+=CurRun*IndexSize;
   } while(x<NumPixels);

   { //Output header
      EncodedFileHeader *OutputHead;
      OutputHead=(EncodedFileHeader*)Output;
      OutputHead->DataSize=BitSize-(sizeof(EncodedFileHeader)+NumIndexes)*8; //Length of file in bits not including header
      OutputHead->Width=Width;
      OutputHead->Height=Height;
      OutputHead->TranspSize=(UCHAR)TranspSize;
      OutputHead->TranslSize=(UCHAR)TranslSize;
      OutputHead->NumIndexes=NumIndexes;
   }
   return BitSize/8+(BitSize%8 ? 1 : 0); //Return entire length of file in bytes
}

UCHAR NumBitsRequired(UINT Num) //Tests how many bits are required to represent a number
{
   UCHAR RetNum;
   _asm //Find the most significant bit
   {
      xor eax, eax //eax=0
      bsr eax, Num //Find most significant bit in eax
      mov RetNum, al
   }
   return RetNum+((UCHAR)(1<<RetNum)==Num ? 0 : 1); //Test if the most significant bit is the only one set, if not, at least 1 more bit is required
}

void WriteToBits(UCHAR* StartPointer, UINT BitStart, UINT Value) //Write Value to Bit# BitStart after StartPointer - Assumes more than 8 bits are never written
{
   *(WORD*)(&StartPointer[BitStart/8])|=Value<<(BitStart%8);
}

The code to decompress the alpha mask in C is as follows: (Shares some header information with above code)

//Decode
void Decode(UCHAR* Input, UCHAR* Output); //Decoding process
UCHAR ReadBits(UCHAR* StartPointer, UINT BitStart, UCHAR BitSize); //Read value from Bit# BitStart after StartPointer - Assumes more than 8 bits are never read
UCHAR NumBitsRequired(UINT Num); //Tests how many bits are required to represent a number --In Encoding Code--

int main()
{
   //--Encoding Code--
      UCHAR *InputBuffer, *OutputBuffer; //Where we will hold our input and output data
      FILE *File; //Handle to current input or output file
      UINT FileSize; //Holds input and output file sizes
   
      //The bitmap headers tell us about its contents
      //Read in bitmap header and confirm file type
      //Read in the rest of the data
      //Run Encode
      //Write encoded data out
   //--END Encoding Code--

   //Run Decode
   UCHAR* O2=(BYTE*)malloc(BitmapFileHead.bfSize);
   Decode(OutputBuffer, O2);

/*   //If writing back out to a 24 bit windows bitmap, this adds the row padding back in
   File=fopen("output.bmp", "wb");
   fwrite(&BitmapFileHead, sizeof(BITMAPFILEHEADER), 1, File);
   fwrite(&BitmapHead, sizeof(BITMAPINFOHEADER), 1, File);
   fwrite(O2, BitmapFileHead.bfSize-sizeof(BITMAPINFOHEADER)-sizeof(BITMAPFILEHEADER), 1, File);*/

   //Free up memory and wait for user input --In Encoding Code--
   return 0;
}

//Decoding
void Decode(UCHAR* Input, UCHAR* Output) //Decoding process
{
   EncodedFileHeader H=*(EncodedFileHeader*)Input; //Save header locally so we have quick memory lookups
   UCHAR Indexes[256], IndexSize=NumBitsRequired(H.NumIndexes); //Save indexes locally so we have quick lookups, use 256 index array so we don't have to allocate memory
   UINT BitOn=0; //Bit we are currently on in reading
   memcpy(Indexes, ((EncodedFileHeader*)Input)->Indexes, 256); //Save the indexes
   Input+=(sizeof(EncodedFileHeader)+H.NumIndexes); //Start reading input past the header

   //Unroll/unencode all the pixels
   do
   {
      UINT i, l; //index counter, length (transparent and then index)
      //Transparent pixels
      memset(Output, TranspMask, l=ReadBits(Input, BitOn, H.TranspSize)*BytesPerPixel);
      Output+=l;

      //Translucent pixels
      l=ReadBits(Input, BitOn+=H.TranspSize, H.TranslSize);
      BitOn+=H.TranslSize;
      for(i=0;i<l;i++) //Write the gray scale out to the 3 pixels, this should technically be done in a for loop, which would unroll itself anyways, but this way ReadBits+index lookup is only done once - ** Would need to be in a for loop if not using gray-scale or 24 bit output
         Output[i*BytesPerPixel]=Output[i*BytesPerPixel+1]=Output[i*BytesPerPixel+2]=Indexes[ReadBits(Input, BitOn+i*IndexSize, IndexSize)];
      Output+=l*BytesPerPixel;
      BitOn+=l*IndexSize;
   } while(BitOn<H.DataSize);

/*   { //If writing back out to a 24 bit windows bitmap, this adds the row padding back in
      UINT i;
      UCHAR RowPad=4-(H.Width*BytesPerPixel%4); //Account for windows DWORD row padding
      RowPad=(RowPad==4 ? 0 : RowPad);
      Output-=H.Width*H.Height*BytesPerPixel; //Restore original output pointer
      for(i=H.Height;i>0;i--) //Go backwards so data doesn't overwrite itself
         memcpy(Output+(H.Width*BytesPerPixel+RowPad)*i, Output+(H.Width*BytesPerPixel)*i, H.Width*BytesPerPixel);
   }*/
}

UCHAR ReadBits(UCHAR* StartPointer, UINT BitStart, UCHAR BitSize) //Read value from Bit# BitStart after StartPointer - Assumes more than 8 bits are never read
{
   return (*(WORD*)&StartPointer[BitStart/8]>>BitStart%8)&((1<<BitSize)-1);
}

Of course, I added some minor assembly and optimized the decoder code to get it from 335 to 266 bytes, which is only 69 bytes less :-\, but it’s something (measured using my Small project). There is no real reason to include it here, as it’s in many of my projects and the included C file for this post.

And then some test code just for kicks...

//Confirm Decoding
BOOL CheckDecode(UCHAR* Input1, UCHAR* Input2, UINT Width, UINT Height); //Confirm Decoding

//---- Put in main function above "//Free up memory and wait for user input" ----
printf(CheckDecode(InputBuffer, O2, BitmapHead.biWidth, BitmapHead.biHeight) ? "good" : "bad");

BOOL CheckDecode(UCHAR* Input1, UCHAR* Input2, UINT Width, UINT Height) //Confirm Decoding
{
   UINT x,y,i;
   UCHAR RowPad=4-(Width*BytesPerPixel%4); //Account for windows DWORD row padding
   RowPad=(RowPad==4 ? 0 : RowPad);

   for(y=0;y<Height;y++)
      for(x=0;x<Width;x++)
         for(i=0;i<BytesPerPixel;i++)
            if(Input1[y*(Width*BytesPerPixel+RowPad)+x*BytesPerPixel+i]!=Input2[y*(Width*BytesPerPixel)+x*BytesPerPixel+i])
               return FALSE;
   return TRUE;
}

From there, it just has to be loaded into a bit array for manipulation and set back a bitmap device context, and it’s done!
VB Code: (Add the signature GIF as a picture box where it is to show up and set its “Visible” property to “false” and “Appearance” to “flat”)

'Swap in and out bits
Private Declare Function GetDIBits Lib "gdi32" (ByVal aHDC As Long, ByVal hBitmap As Long, ByVal nStartScan As Long, ByVal nNumScans As Long, lpBits As Any, lpBI As BITMAPINFOHEADER, ByVal wUsage As Long) As Long
Private Declare Function SetDIBitsToDevice Lib "gdi32" (ByVal hdc As Long, ByVal x As Long, ByVal y As Long, ByVal dx As Long, ByVal dy As Long, ByVal SrcX As Long, ByVal SrcY As Long, ByVal Scan As Long, ByVal NumScans As Long, Bits As Any, BitsInfo As BITMAPINFOHEADER, ByVal wUsage As Long) As Long
Private Type RGBQUAD
      b As Byte
      g As Byte
      r As Byte
      Reserved As Byte
End Type
Private Type BITMAPINFOHEADER '40 bytes
      biSize As Long
      biWidth As Long
      biHeight As Long
      biPlanes As Integer
      biBitCount As Integer
      biCompression As Long
      biSizeImage As Long
      biXPelsPerMeter As Long
      biYPelsPerMeter As Long
      biClrUsed As Long
      biClrImportant As Long
End Type
Private Const DIB_RGB_COLORS = 0 '  color table in RGBs

'Prepare colors
Private Declare Sub CopyMemory Lib "kernel32" Alias "RtlMoveMemory" (Destination As Any, Source As Any, ByVal Length As Long)
Private Declare Function GetBkColor Lib "gdi32" (ByVal hdc As Long) As Long

Public Sub DisplaySignature(ByRef TheForm As Form)
   'Read in Signature
   Dim BitmapLength As Long, OutBitmap() As RGBQUAD, BitInfo As BITMAPINFOHEADER, Signature As PictureBox
   Set Signature = TheForm.Signature
   BitmapLength = Signature.Width * Signature.Height
   ReDim OutBitmap(0 To BitmapLength - 1) As RGBQUAD
   With BitInfo
           .biSize = 40
           .biWidth = Signature.Width
           .biHeight = -Signature.Height
           .biPlanes = 1
           .biBitCount = 32
           .biCompression = 0 'BI_RGB
           .biSizeImage = .biWidth * 4 * -.biHeight
   End With
   GetDIBits Signature.hdc, Signature.Image, 0, Signature.Height, OutBitmap(0), BitInfo, DIB_RGB_COLORS
   
   'Alpha blend signature
   Dim i As Long, Alpha As Double, BackColor As RGBQUAD, ForeColor As RGBQUAD, OBC As Long, OFC As Long
   OFC = &H67906
   OBC = GetBkColor(TheForm.hdc)
   CopyMemory BackColor, OBC, 4
   CopyMemory ForeColor, OFC, 4
   For i = 0 To BitmapLength - 1
       Alpha = 1 - (CDbl(OutBitmap(i).r) / 255)
       OutBitmap(i).r = ForeColor.r * Alpha + BackColor.r * (1 - Alpha)
       OutBitmap(i).g = ForeColor.g * Alpha + BackColor.g * (1 - Alpha)
       OutBitmap(i).b = ForeColor.b * Alpha + BackColor.b * (1 - Alpha)
   Next i
   
   SetDIBitsToDevice TheForm.hdc, Signature.Left, Signature.Top, Signature.Width, Signature.Height, 0, 0, 0, Signature.Height, OutBitmap(0), BitInfo, DIB_RGB_COLORS
   TheForm.Refresh
End Sub

C Code

//Prepare to decode signature
   //const UCHAR BytesPerPixel=4, TranspMask=255; //32 bits per pixel (for quicker copies and such - variable not used due to changing BYTE*s to DWORD*s), and white=transparent background color - also not used anymore since we directly write in the background color
   //Load data from executable
   HGLOBAL GetData=LoadResource(NULL, FindResource(NULL, "DakSig", "Sig")); //Load the resource from the executable
   BYTE *Input=(BYTE*)LockResource(GetData); //Get the resource data

   //Prepare header and decoding data
   UINT BitOn=0; //Bit we are currently on in reading
   EncodedFileHeader H=*(EncodedFileHeader*)Input; //Save header locally so we have quick memory lookups
   DWORD *Output=Signature=new DWORD[H.Width*H.Height]; //Allocate signature memory

   //Prepare the index colors
   DWORD Indexes[17], IndexSize=NumBitsRequired(H.NumIndexes); //Save full color indexes locally so we have quick lookups, use 17 index array so we don't have to allocate memory (since we already know how many there will be), #16=transparent color
   DWORD BackgroundColor=GetSysColor(COLOR_BTNFACE), FontColor=0x067906;
   BYTE *BGC=(BYTE*)&BackgroundColor, *FC=(BYTE*)&FontColor;
   for(UINT i=0;i<16;i++) //Alpha blend the indexes
   {
      float Alpha=((EncodedFileHeader*)Input)->Indexes[i] / 255.0f;
      BYTE IndexColor[4];
      for(int n=0;n<3;n++)
         IndexColor[n]=(BYTE)(BGC[n]*Alpha + FC[n]*(1-Alpha));
      //IndexColor[3]=0; //Don't really need to worry about the last byte as it is unused
      Indexes[i]=*(DWORD*)IndexColor;
   }
   Indexes[16]=BackgroundColor; //Translucent background = window background color

//Unroll/unencode all the pixels
Input+=(sizeof(EncodedFileHeader)+H.NumIndexes); //Start reading input past the header
do
{
   UINT i, l; //Index counter, length (transparent and then index)
   //Transparent pixels
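   //memsetd is assumed to be a DWORD-wide memset helper from the author's library (fills l DWORDs with the given value)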
   memsetd(Output, Indexes[16], l=ReadBits(Input, BitOn, H.TranspSize));
   Output+=l;

   //Translucent pixels
   l=ReadBits(Input, BitOn+=H.TranspSize, H.TranslSize);
   BitOn+=H.TranslSize;
   for(i=0;i<l;i++) //Write the gray scale out to the 3 pixels, this should technically be done in a for loop, which would unroll itself anyways, but this way ReadBits+index lookup is only done once - ** Would need to be in a for loop if not using gray-scale or 24 bit output
      Output[i]=Indexes[ReadBits(Input, BitOn+i*IndexSize, IndexSize)];
   Output+=l;
   BitOn+=l*IndexSize;
} while(BitOn<H.DataSize);

//Output the signature
const BITMAPINFOHEADER MyBitmapInfo={sizeof(BITMAPINFOHEADER), 207, 42, 1, 32, BI_RGB, 0, 0, 0, 0, 0};
SetDIBitsToDevice(MyDC, x, y, MyBitmapInfo.biWidth, MyBitmapInfo.biHeight, 0, 0, 0, MyBitmapInfo.biHeight, Signature, (BITMAPINFO*)&MyBitmapInfo, DIB_RGB_COLORS);

This all adds ~3.5KB to each VB project, and ~2KB to each C/CPP project. Some other recent additions to all project executables include the Hyrulean Productions icon (~1KB) and file version information (1-2KB). I know that a few KB doesn’t seem like much, but when executables are often around 10KB, it can almost double their size.

While I’m on the topic of project sizes, I should note that I always compress their executables with UPX, a very nifty executable compressor. It would often be more prudent to use my Small project, but I don’t want to complicate my open-source code.


One other possible solution I did not pursue would be to take the original font and create a subset font of it with only the letters (and font size?) I need, and see if that file is smaller. I doubt it would have worked well though.

427
Posts / Always Confirm Potentially Hazardous Actions
« on: September 28, 2009, 05:31:02 am »

So I have been having major speed issues with one of our servers. After countless hours of diagnosis, I determined the bottleneck was always I/O (input/output, accessing the hard drive).  For example, when running an MD5 hash on a 600MB file, the load average would jump up to 31 (with 4 logical CPUs) and it would take 5-10 minutes to complete. When performing the same test on a second drive in the same machine, it finished within seconds.

Replacing the hard drive itself is a last resort for a live production server, and a friend suggested the drive controller could be the problem, so I confirmed that the drive controller for our server was not on-board (it was on its own card), and I attempted to convince the company hosting our server of the problem so they would replace the drive controller. I ran my own tests first with an iostat check while doing a read of the main hard drive (cat /dev/sda > /dev/null). This produced steadily worsening results the longer the test went on, and always much worse than our secondary drive. I passed these results on to the hosting company, and they replied that a “badblocks -vv” produced results that showed things looked fine.

So I was about to run his test to confirm his findings, but decided to check the parameters first, as I always like to do before running new Linux commands.  Thank Thor I did. The admin had meant to write “badblocks -v” (verbose) and typoed with a double keystroke. The two v’s looked like a w due to the font, and had I run a “badblocks -w” (write-mode test), I would have wiped out the entire hard drive.

Anyways, the test output the same basic results as my iostat test, with throughput results very quickly decreasing from a remotely acceptable level to almost nil.  Of course, the admin had only taken the best results of the test, ignoring the rest.

I had them swap out the drive controller anyways, and it hasn’t fixed things, so a hard drive replacement will probably be needed soon.  This kind of problem would be trivial if I had access to the server and could just test the hardware myself, but that is the price to pay for proper security at a server farm.


428
Posts / Useful Bash commands and scripts
« on: September 28, 2009, 05:31:01 am »

First, to find out more about any bash command, use
man COMMAND

Now, a primer on the three most useful bash commands: (IMO)
find:
Find will search through a directory and its subdirectories for objects (files, directories, links, etc) satisfying its parameters.
Parameters are written like a math query, with parentheses for order of operations (make sure to escape them with a “\”!), -a for boolean “and”, -o for boolean “or”, and ! for “not”.  If neither -a nor -o is specified, -a is assumed.
For example, to find all files that contain “conf” but do not contain “.bak” as the extension, OR are greater than 5MB:
find -type f \( \( -name "*conf*" ! -name "*.bak" \) -o -size +5120k \)
Some useful parameters include:
  • -maxdepth & -mindepth: only look through certain levels of subdirectories
  • -name: name of the object (-iname for case insensitive)
  • -regex: name of object matches regular expression
  • -size: size of object
  • -type: type of object (block special, character special, directory, named pipe, regular file, symbolic link, socket, etc)
  • -user & -group: object is owned by user/group
  • -exec: exec a command on found objects
  • -print0: output each object separated by a null terminator (great so other programs don’t get confused from white space characters)
  • -printf: output specified information on each found object (see man file)

For any number operations, use:
+n for greater than n
-n for less than n
n for exactly n

For a complete reference, see your find’s man page.

xargs:
xargs passes piped arguments to another command as trailing arguments.
For example, to list information on all files in a directory greater than 1MB: (Note this will not work with paths with spaces in them, use “find -print0” and “xargs -0” to fix this)
find -size +1024k | xargs ls -l
Some useful parameters include:
  • -0: piped arguments are separated by null terminators
  • -n: max arguments passed to each command
  • -i: replaces “{}” with the piped argument(s)

So, for example, if you had 2 mirrored directories, and wanted to sync their modification timestamps:
cd /ORIGINAL_DIRECTORY
find -print0 | xargs -0 -i touch -m -r "{}" "/MIRROR_DIRECTORY/{}"

For a complete reference, see your xargs’s man page.

grep:
GREP is used to search through data for plain text, regular expression, or other pattern matches.  You can use it to search through both pipes and files.
For example, to get your number of CPUs and their speeds:
cat /proc/cpuinfo | grep MHz
Some useful parameters include:
  • -E: use extended regular expressions
  • -P: use perl regular expression
  • -l: output files with at least one match (-L for no matches)
  • -o: show only the matching part of the line
  • -r: recursively search through directories
  • -v: invert to only output non-matching lines
  • -Z: separates matches with null terminator

So, for example, to list all files under your current directory that contain “foo1”, “foo2”, or “bar”, you would use:
grep -rlE "foo(1|2)|bar"

For a complete reference, see your grep’s man page.

And now some useful commands and scripts:
List size of subdirectories:
du --max-depth=1
The --max-depth parameter specifies how many sub levels to list.
-h can be added for more human readable sizes.

List number of files in each subdirectory*:

#!/bin/bash
export IFS=$'\n' #Forces only newlines to be considered argument separators
for dir in `find -maxdepth 1 -type d`
do
   a=`find "$dir" -type f | wc -l`;
   if [ $a != "0" ]
   then
      echo "$dir" $a
   fi
done
and to sort those results
SCRIPTNAME | sort -n -k2

List number of different file extensions in current directory and subdirectories:
find -type f | grep -Eo "\.[^\.]+$" | sort | uniq -c | sort -nr

Replace text in file(s):
perl -i -pe 's/search1/replace1/g; s/search2/replace2/g' FILENAMES
If you want to make pre-edit backups, include an extension after “-i” like “-i.orig”

Perform operations in directories with too many files to pass as arguments: (in this example, remove all files from a directory 100 at a time instead of using “rm -f *”)
find -type f | xargs -n100 rm -f

Force kill all processes by name:
killall -9 NAME
(killall matches exact process names; to kill processes whose command lines merely contain a string, use “pkill -9 -f STRING”)

Transfer MySQL databases between servers: (Works in Windows too)
mysqldump -u LOCAL_USER_NAME -p LOCAL_DATABASE | mysql -u REMOTE_USER_NAME -p -D REMOTE_DATABASE -h REMOTE_SERVER_ADDRESS
“-p” specifies a password is needed

Some lesser known commands that are useful:
screen: This opens up a virtual console session that can be disconnected and reconnected from without stopping the session. This is great when connecting to console through SSH so you don’t lose your progress if disconnected.
htop: An updated version of top, which is a process information viewer.
iotop: A process I/O (input/output - hard drive access) information viewer.  Requires Python ≥ 2.5 and I/O accounting support compiled into the Linux kernel.
dig: Domain information retrieval. See “Diagnosing DNS Problems” Post for more information.
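As a usage sketch for screen from the list above: run screen to start a session, press Ctrl-a then d to detach while leaving everything running, and reattach later with:
screen -r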

More to come later...

*Anything starting with “#!/bin/bash” is intended to be put into a script.

429
Posts / Text Message Storage Limits
« on: September 28, 2009, 05:31:00 am »
Original post for Text Message Storage Limits can be found at https://www.castledragmire.com/Posts/Text_Message_Storage_Limits.
Originally posted on: 02/20/08

So I’ve been rather perturbed for a very long time at the 50/50 inbox/outbox limit of stored SMS text messages in all LG cell phones.  Other phones have similar limits; a Samsung I have, for example, is limited to 100/50, and it just erases messages when an overflow occurs, as opposed to the nice prompts on my LG VX9800 (with its QWERTY keyboard, which I love).

I have done some minor hacking on cell phones and tinkered with the firmware, but without a proper emulator, I would never be able to find out where the 50 cap is set and make a hack so phones could store more.


So today, I was at a Verizon store [unimportant ordeal here] because I got a little bit of water on my LG phone and it was having issues.  Immediately after the spill, it had a bunch of problems including the battery thinking it was always charging, buttons on the front side sending two different buttons when pressed, and some other buttons not working.  I immediately set to shaking it out at all angles to get most of the water out (which there wasn’t much to begin with...), and then I thoroughly blow dried every opening into the inside circuitry.  This fixed everything but the worst problem, signal dropping.  Basically, the phone would lose any connection it made after about 5 seconds, so I couldn’t really answer or makes calls.  Fortunately I was still able to send and receive SMS messages, but received ones didn’t signal the server they were received, and I kept receiving them over and over and over until a connection finally stayed open long enough to tell the server I got it.
So I took it back to the store to see if they could fix it, and all they tried was updating the firmware... but they said I could trade it in for another phone for $50, which I figured from the beginning is what I would have to do, and was a good idea anyways because of this [temporarily down].
Then they realized they had no replacements in stock... or at the warehouse... for the VX9800 OR the VX9900, which they said they’d upgrade me to if they couldn’t find a VX9800, and which I wanted (yay). So I was told to call back tomorrow and try again. Bleh. Anyways, it was at the store that I found out the reason for these message limits. Apparently, cell phones start slowing down considerably with too many stored SMSs. I was told of a lady that had come in the previous week with 600+ stored messages and the phone took very long intervals to do anything, and clearing it fixed it.

I know that, on my phone at least, each SMS message is stored as a separate file, so my best guess as to the reason for this problem is that this creates too many entries in the file system for the phone to handle.  This seems like a rather silly and trivial problem to work around, but the cell phone manufacturers can get away with it, as they have no good competitors that fix problems like this.


This is why we really need open source cell phones.  There has been word of open source phones in the works for years... but nothing too solid yet :-\.


So ANYWAYS, I had already started taking a different approach in early January to fix the problem of backing up SMS messages without having to sync them to your computer, which is a rather obnoxious work around.  I had been researching and planning to write a BREW application that extracts all SMS messages into a text file on your phone so that you don’t have to worry about the limits, and could download them to your computer whenever you wanted, with theoretically thousands of SMS messages archived on your phone.  Unfortunately, as usual, other things took over my time and the project was halted, but I will probably be getting back to it soon.


430
Posts / Video driver woes
« on: September 28, 2009, 05:30:59 am »
Original post for Video driver woes can be found at https://www.castledragmire.com/Posts/Video_driver_woes.
Originally posted on: 02/14/08

So I’ve recently switched over to an old GeForce4 Ti 4600 for TV output on my home server/TV station.  Unfortunately, my TV needs output resizing (underscan) because it was dropped a long ways back during transport from a Halo game, and the CRT output is misaligned.

If I recall, old Nvidia drivers allowed output resizing, but the latest available ones that work for my card (which are rather old themselves, as Nvidia stops supporting old cards with the newer driver sets that have more options) only allow repositioning of the output signal, so part of the screen is cut off.

The final solution was to tell VLC media player to output videos at 400:318 aspect ratio when in full screen to force a smaller width that I could then reposition to properly fit the screen.  A rather inelegant solution, but it works.  One of these days I’ll get myself a new TV :-).


431
Posts / Truecrypt 5.0 tribulations
« on: September 28, 2009, 05:30:58 am »
Original post for Truecrypt 5.0 tribulations can be found at https://www.castledragmire.com/Posts/Truecrypt_5.0_tribulations.
Originally posted on: 02/08/08

Just as with Windows, where you never install until at least the first service pack is released, so it seems to be the case with TrueCrypt.


TrueCrypt is open source, which is a major plus, and in my opinion, the best solution for encrypting data.  In a nutshell, TrueCrypt allows the creation of encrypted “container files” that when mounted act as a hard drive partition, accessible through a password and/or a key file.  The encryption, security, and speed are all top notch and the program runs completely transparent to the user after volume mounting, so I would highly recommend the program to anyone that has anything at all to hide :-).

It also has some other useful options, like the ability to encrypt USB flash cards for opening at other locations without having TrueCrypt installed, and “hidden container files”, in which a second hidden volume is contained within the same container, unlockable by a separate password/key file, which is great for plausible deniability.  I have always been a fan of TrueCrypt since I first found and adopted it years ago, and would highly recommend it.


Unfortunately, TrueCrypt 5.0, which was just released a few days ago, does not yet meet quality standards.  It does all the old stuff it used to of course, and adds some great new features, but the multiple bugs I have found are forcing me to revert to an older version of it, and back to other 3rd party applications I have been using for other types of encryption.


The new feature I’ve been looking forward to for ages is pre-boot authentication volume encryption, which basically means encrypting 100% of the hard drive (partition) that contains Windows (or another OS) on it, so you only have to put in your password during boot, and EVERYTHING is encrypted and safe, and impossible (by today’s standards) to access before the password is put in.  This is especially important for laptops, due to the increased likelihood of one falling into others’ hands through loss or theft.

Unfortunately, full volume encryption has broken 2 things: the ability to put my laptop into hibernation (which was also a problem with other volume encryption programs I’ve tried in the past), and, oddly enough, my audio drivers, so I have no sound XD.  So, I’m reverting back to BestCrypt Volume Encryption [v1.95.1], which I’ve also been using for quite a while, and which does the same thing but allows hibernation.  My only beefs with it are that it’s closed source, something that isn’t usually a problem in my book but is in this case [security], and that hibernation is SLOW, probably due to the fact that it can no longer use DMA, since the data needs to pass through the CPU for encryption.  Another, technically not so important, feature TrueCrypt doesn’t include yet, which most other pre-boot authentication volume encryption packages do, is customized boot password prompt screens.  I’ve included my incredibly dorky screens (for BestCrypt Volume Encryption) below :-D.

The other thing that is broken, oddly enough, forcing me to revert to TrueCrypt 4.3a, is that I can’t mount containers over a network anymore through Windows File and Print Sharing :-\.  Ah well, hopefully they’ll get these things fixed soon enough.



My boot password prompt, and no, I will not explain it, except that DarkSide was my previous computer handle a very good number of years ago.
My Boot Prompt

A boot prompt I made for a female friend, weeee, ASCII art ^_^;.
Friend’s Boot Prompt

And for reference, the ASCII chart.
ASCII chart
Note that when creating a screen for BestCrypt Volume Encryption, the characters 0x08, 0x09, 0x0A, and 0x0D are all invalid.  The “&” character is used to place the password prompt.

One other Volume Encryption I tried, which was just about as good, though I do not recall if it allowed hibernation, was DriveCrypt Plus Pack [v3.90G].  It also allowed bitmaps [pictures] for the boot password prompt screen.

432
Posts / Internet Explorer Identity Crisis
« on: September 28, 2009, 05:30:57 am »

Does anyone else find it odd that IE reports itself as ‘Mozilla’ if you access the navigator.appCodeName variable?  You can test this out by putting the following in your browser as the URL: javascript:alert(navigator.appCodeName).  Or you could check out this script, where I noticed this, which reports all the information about you that is accessible via JavaScript/PHP when you visit a web page.

433
Posts / GTO (and other TV series)
« on: September 28, 2009, 05:30:56 am »
Original post for GTO (and other TV series) can be found at https://www.castledragmire.com/Posts/GTO_(and_other_TV_series).
Originally posted on: 01/15/08

I have been a very long time fan of the anime series GTO (Great Teacher Onizuka), though I have only ever owned and seen the first 4 of 10 DVDs.  The series is heavily geared towards adolescent males (shonen) and has its immaturity issues, but it’s still a great romantic comedy, with the romantic part paling next to the comedy.


So I very recently acquired the rest of the series, and really wish I had just left off on the fourth DVD (19th episode), where the series planning obviously ended.  Up to that point, it was very strongly plot driven, with character development as the primary outlet.  It then turned into entirely filler content with a very loose and unrealistic plot.  The series was actually following the manga (comic) plot line until episode 14, when it bypassed it in the timeline.  But really, I couldn’t believe how everything past that point was just so much a waste of time.  How people can turn such things of beauty (not necessarily the series visually, but the storyline...) into utter rubbish so quickly always catches me off guard, though I know I should be used to it by now.


Extending series past their originally planned plotlines and churning out utter crap is a very common problem among television shows, and especially in anime, as the Japanese have a way of carrying things on for way too long.  Think Mario, Zelda, Pokemon, and Power Rangers, and those are just a few examples of long-standing Japanese IPs that actually made it to America.  Americans may have a way of milking things for all they are worth for profit, but the Japanese not only have extra profit as a driving force, but also incredibly obsessive fan bases (otaku) demanding more content.


Some other examples of this I have to mention off the top of my head are:
  • Nadia - See previous post for more information
  • Kodomo no Omocha (Kodocha), a SUPER girly (shojo) anime and another of my favorite series, is 100% plot-driven excellence.  Up through episode 19, which I believe to be the true ending of Season 1, the multitudes of brilliantly interweaving story arcs are breathtaking and moving.  From this point, it continued on for another 83 episodes (102 total), of which I have only seen through episode 44.  While the general series worthiness seriously degrades at this turning point, it is still a lot of super-hyper-spastic fun.
  • Full Metal Alchemist, yet another of my favorite series, is an actual example of this problem NOT happening, though it has it happen in a different form.  The series has a strong plot-driven and well-organized vibe that makes me believe the original 51 episodes were all mostly planned out from the start, though a few inconsistencies between early and late episodes make me not entirely sure.  The problem comes in the form of the movie, which I felt to be a complete waste of time to watch.  I will expand upon this in the future.
  • The Simpsons, which really should have ended in season 3 (what I like to call “Classic Simpsons”), turned into utter retard-like-babbling rubbish somewhere in seasons 7-10.  It was initially a very intriguing show, with witty characters (yes, Homer was in a manner quite witty) and plots, but unfortunately, the series degraded by pushing the characters’ stereotypes way too far, making them boring, repetitive, and predictable, and repeating the same basic plots and jokes time and time again.
  • And finally, Stargate SG1, which needed to end in Season 7 when the Goa’uld were pretty much defeated, and which is still harboring a bastard child known as Stargate Atlantis.  While the shows may still have some basic entertainment value, they are mere husks of their former glory.

434
Posts / Windows 98
« on: September 28, 2009, 05:30:55 am »
Original post for Windows 98 can be found at https://www.castledragmire.com/Posts/Windows_98.
Originally posted on: 01/12/08

So I just plopped in an old Win98 CD (in this case SP2) to grab the QBasic files off of it for the Languages and Libraries page.  I started browsing through the CD, and thought to myself “OMG... win98!”, heh.  So I installed it, and wow, am I ever in super nostalgia mode.

Things I now take for granted that were major pains in the pre-XP days (well, pre-NT-kernel days...):
  • Getting non-modem LAN connections on the internet: Win98 expected people to connect to the internet via phone modems, as broadband was still virtually unheard of then.  The “Windows Connection Wizard” was a pain in the butt, and you had to know just the right place to go to get it to recognize a NIC as a valid connection to the internet.
  • Shutting down windows improperly: If you failed to turn off the computer through the proper “Shut Down” method, the FAT file systems did not have certain types of safeguards that NTFS does, and the computer was forced to do a ScanDisk on startup.  A ScanDisk is also run the first time windows starts after install, and seeing this old piece of software really gave me a warm fuzzy feeling... or was it a feeling of utter nausea?
  • RAM allocation: The DOS-line kernels of windows never properly kept track of memory from applications, and memory leaks in applications STAYED memory leaks after the program shut down, so RAM could very quickly get eaten up.  Programs called “RAM Scrubbers” were around to find these leaked memory blocks and free them.
  • Themes: Most people don’t know that windows themes actually originated with Microsoft Plus! for Windows 95 (I could have sworn it was originally called Windows Plus!... need to find my original CD) software package, which also first introduced the ever-popular and addicting Space Cadet Pinball (check the games folder that comes installed in XP).  Most Plus! options were usually integrated straight into later Windows versions or updates.  I have included below all the Themes that came with Windows 98 SE for nostalgic value :-).  Enjoy!

    Speaking of games, it seems 98SE also included FreeCell... I wasn’t aware it was that old.  I think the “Best of Windows Entertainment Pack” (with “Chips Challenge”, “Golf”, “Rodent’s Revenge”, “Tetris”, “SkiFree”, and some other fun games) also originally came on the Plus! CDs, but am not sure of this.  I believe the Best Pack also came with the CD packs included with new computers from Packard Bell, and maybe some other manufacturers, for like 2 or 3 years in the mid 90s; those packs also included the first game of one of my most favorite game series ever, Journey Man, as well as Microsoft Encarta, Britannica, a Cook Book CD, and a Do-It-Yourself Book CD.  Good times!!!
  • Calendar: The calendar only displayed 2 digits for the year instead of 4... does this mean Microsoft was expecting everyone to switch from 98 immediately when their next OS (Windows ME [heh] or 2K) came out?  See “The Old New Thing” for another interesting problem of the windows calendar of old.
Things that made me laugh:
  • The first question asked during install was “You have a drive over 512mb in size, would you like to enable large disk support?”
  • All the 3d screensavers were OpenGL.  Though DirectX was out at that point, it was still in a state of sheer-crappiness so Microsoft still used OpenGL, which it wouldn’t be caught dead using nowadays ^_^.
  • During install, there were lots of messages touting the operating systems features, including “By converging real-time 2d and 3d graphics ... *MMX is a trademark of Intel Corporation”.  It just made me smile knowing that MMX was once so new Microsoft had to put a trademark warning like that.
  • Internet Explorer (5.0) started up at MSN.com already... which immediately crashed the browser! hehe
  • The windows update website informed me as follows: “Important: End of Support for Windows 98 and Windows ME
    Effective July 11, 2006, support for Windows 98, Windows 98 Second Edition and Windows ME (and their related components) will end. Updates for Windows 98 and Windows ME will be limited to those updates that currently appear on the Windows Update website.”
Things that I miss:
  • The emotion behind the OS.  For some reason, Windows 98 and 95 always had... a warmness to them that 2K/XP never had.  I’m not sure why... but the newer operating systems always had such a stiff and corporate feeling to them.
  • Winipcfg!  Now I am forced to go to the darn command prompt to do it via ipconfig (which was available then also), which is a pain when you have too many NICs and it scrolls the console window, or when trying to help someone get their IP address or MAC address.
  • Restart in MS-DOS mode!  Man do I ever miss that.  Especially for playing original DOOM. Good ’ol 640k ^_^.  The 3.x/95/98 kernels were really based upon DOS so it was valid to have a DOS only mode, but there’s nothing stopping them from including it on newer computers... well, except that DOS didn’t support NTFS, I guess... so it would be confusing.  Ah well.
  • FAST load time.  If I recall, Win98 always loaded leaps and bounds faster than XP... it probably has to do with drivers.


Themes: (Owned by Microsoft?)
Baseball, Dangerous Creatures, Inside Your Computer, Jungle, Leonardo da Vinci, More Windows, Mystery, Nature, Science, Space, Sports, The 60’s USA, The Golden Era, Travel, Underwater, Windows 98, Windows Default

[Screenshots of each of the above themes followed here.]

435
Posts / The god complex
« on: September 28, 2009, 05:30:54 am »
Original post for The god complex can be found at https://www.castledragmire.com/Posts/The_god_complex.
Originally posted on: 01/05/08

Oops, life kind of hit me like a ton of bricks the last few days and I haven’t had time to get much done.  It didn’t help that I had a 72 hour straight run of wakefulness, then slept for about 24 hours straight :-).  *Shakes fist at certain medications*.  But now to continue on the second section of my previous medical post...


Medical science has come a very long way in the last 100 years, making very large, important jumps all the time, but there is still a very, very long way to go.  The “purpose” of the appendix was just “officially” found not too long ago, and if something that simple took that long to find out...  But anyways, most of where we are in medicine still involves a lot of guessing and fuzzy logic.  While we do know many things for certain, diagnosing is still more often than not guesswork based on what the patient can describe.  Even when we know what the problem is, we still aren’t always sure of the definite cause, and without that, we can only make educated guesses on how to treat it.  Sometimes we even have the knowledge to diagnose a problem, but it may be too expensive, in a time+effort vs gains manner, or possibly too early in developmental stages and not considered proper yet.  Then again, sometimes we even do have the answers, but they are being withheld for “evil” purposes.  Anyways, I have 4 stories I’d like to share today on this topic to drive my point home.


First, I’ll get my own story out of the way.  A couple of years back, my appendix burst; I assumed it was just my IBS, as stated in my previous post.  Two days afterwards, I went to the doctor, and we specifically said we wanted to rule out appendicitis as a cause, so they took an x-ray, which somehow turned up negative... so I was diagnosed with constipation, which my mother had often noted was what she thought it must be.

So on the way out of the office, stepping out of the door, I stopped and asked the doctor if they could take a blood sample so I could see how my cholesterol was doing (I’ve been fighting high cholesterol for a long time; the medication I take for it works wonders), and they did.  So I took some laxatives, and 3 days later I was still in lots of pain with lots of other problems.  The call from the doctor came in the middle of that Monday (I had gone to the doctor mid-Friday), right before I was about to call them, and I was instructed to go straight to the hospital, as my (white?) blood cell count was super high.  Thank Thor I asked them.

So I went to the hospital, where they did a few tests (one involving first drinking a liter of a liquid that tasted like chalk, which I also had to do once on a return visit), and they came back and told me my appendix had burst and that, somehow, miraculously, I wasn’t dead, due to a pocket forming and containing the toxin, and that I was to go into surgery within hours.  Obviously, everything went relatively well, as I am still here.

There was one really painful night, though, with a temperature so high that I was apparently hallucinating, though I don’t remember it.  So I got out of the hospital after a week... and then immediately went back in that night due to a bacterial infection, and was on antibiotics for another week.  At least I didn’t need morphine (ah gee...) that second week.

On a more silly note, right before going into surgery, I jokingly asked my female surgeon how long it would take, as I had to log into my computer every (5?) hours for security or it would erase all my porn (or something like that).  Well, the poor naive doctor took it seriously, and literally turned as red as an apple, at which point I had to rescind my statement and explain I was just joking ^_^;.


The second story is much more recent.  I can’t go into details, but a friend of mine was at the hospital with some stomach problems, and the doctors came back with congratulations, saying she was pregnant.  After she finally convinced them that she could not possibly be pregnant, and was pretty sure she wasn’t carrying the reincarnation of Jesus, they did more tests and found out it was a rather nasty cyst in her (uterus?); good job, doc(s)... so she had it removed.  They determined what type of cancer it was very soon after, when the bloodwork came back... so she’s been in very aggressive therapy since.


The next story has been a long-time upset of mine.  A female cousin of mine, who has always been as sweet as can be, contracted Lyme disease.  This in and of itself wouldn’t normally have been a problem, except that she and her parents had to go doctor hopping for well over a year to finally get it properly diagnosed.  By this advanced stage of the disease, it was too late to treat it properly with no after effects, so she has lost most of the last 5+ years of her life to the disease and the incredible lethargy and problems it causes.

They have been trying many, many ways to cure the problem, and are finally hopeful about a new possible solution they’ve found.  I hope to Thor it works out and she can start living her life to the fullest again; her story actually parallels the next one quite well.


I saved this one for last because it involves a celebrity :-).  Scott Adams, creator/artist of the Dilbert comic strip, had been afflicted for a few years with spasmodic dysphonia, which causes an inability to speak in certain situations.  After going through the prescribed medical procedure, which involves long needles several times per year for the rest of your life, he finally found a doctor with a very large success rate of curing the illness, and it worked for him too.

Apparently, the pharmaceutical industry shuts out any info it can about the proper treatment, as it makes fistfuls of money peddling its very expensive temporary Botox treatments that often don’t work well, or at all.


Long story short, our medical industry has a long way to go before I consider it a true science, the first step being saving it from the grip of the pharmaceutical giant.



Scott Adams’ Blog Posts:
Good News Day (October 24, 2006): Original Post, Archive

Good News Day

As regular readers of my blog know, I lost my voice about 18 months ago. Permanently. It’s something exotic called Spasmodic Dysphonia. Essentially a part of the brain that controls speech just shuts down in some people, usually after you strain your voice during a bout with allergies (in my case) or some other sort of normal laryngitis. It happens to people in my age bracket.

I asked my doctor – a specialist for this condition – how many people have ever gotten better. Answer: zero. While there’s no cure, painful Botox injections through the front of the neck and into the vocal cords can stop the spasms for a few months. That weakens the muscles that otherwise spasm, but your voice is breathy and weak.

The weirdest part of this phenomenon is that speech is processed in different parts of the brain depending on the context. So people with this problem can often sing but they can’t talk. In my case I could do my normal professional speaking to large crowds but I could barely whisper and grunt off stage. And most people with this condition report they have the most trouble talking on the telephone or when there is background noise. I can speak normally alone, but not around others. That makes it sound like a social anxiety problem, but it’s really just a different context, because I could easily sing to those same people.

I stopped getting the Botox shots because although they allowed me to talk for a few weeks, my voice was too weak for public speaking. So at least until the fall speaking season ended, I chose to maximize my onstage voice at the expense of being able to speak in person.

My family and friends have been great. They read my lips as best they can. They lean in to hear the whispers. They guess. They put up with my six tries to say one word. And my personality is completely altered. My normal wittiness becomes slow and deliberate. And often, when it takes effort to speak a word intelligibly, the wrong word comes out because too much of my focus is on the effort of talking instead of the thinking of what to say. So a lot of the things that came out of my mouth frankly made no sense.

To state the obvious, much of life’s pleasure is diminished when you can’t speak. It has been tough.

But have I mentioned I’m an optimist?

Just because no one has ever gotten better from Spasmodic Dysphonia before doesn’t mean I can’t be the first. So every day for months and months I tried new tricks to regain my voice. I visualized speaking correctly and repeatedly told myself I could (affirmations). I used self hypnosis. I used voice therapy exercises. I spoke in higher pitches, or changing pitches. I observed when my voice worked best and when it was worst and looked for patterns. I tried speaking in foreign accents. I tried “singing” some words that were especially hard.

My theory was that the part of my brain responsible for normal speech was still intact, but for some reason had become disconnected from the neural pathways to my vocal cords. (That’s consistent with any expert’s best guess of what’s happening with Spasmodic Dysphonia. It’s somewhat mysterious.) And so I reasoned that there was some way to remap that connection. All I needed to do was find the type of speaking or context most similar – but still different enough – from normal speech that still worked. Once I could speak in that slightly different context, I would continue to close the gap between the different-context speech and normal speech until my neural pathways remapped. Well, that was my theory. But I’m no brain surgeon.

The day before yesterday, while helping on a homework assignment, I noticed I could speak perfectly in rhyme. Rhyme was a context I hadn’t considered. A poem isn’t singing and it isn’t regular talking. But for some reason the context is just different enough from normal speech that my brain handled it fine.

Jack be nimble, Jack be quick.
Jack jumped over the candlestick.

I repeated it dozens of times, partly because I could. It was effortless, even though it was similar to regular speech. I enjoyed repeating it, hearing the sound of my own voice working almost flawlessly. I longed for that sound, and the memory of normal speech. Perhaps the rhyme took me back to my own childhood too. Or maybe it’s just plain catchy. I enjoyed repeating it more than I should have. Then something happened.

My brain remapped.

My speech returned.

Not 100%, but close, like a car starting up on a cold winter night. And so I talked that night. A lot. And all the next day. A few times I felt my voice slipping away, so I repeated the nursery rhyme and tuned it back in. By the following night my voice was almost completely normal.

When I say my brain remapped, that’s the best description I have. During the worst of my voice problems, I would know in advance that I couldn’t get a word out. It was as if I could feel the lack of connection between my brain and my vocal cords. But suddenly, yesterday, I felt the connection again. It wasn’t just being able to speak, it was KNOWING how. The knowing returned.

I still don’t know if this is permanent. But I do know that for one day I got to speak normally. And this is one of the happiest days of my life.

But enough about me. Leave me a comment telling me the happiest moment of YOUR life. Keep it brief. Only good news today. I don’t want to hear anything else.



Voice Update (January 14, 2007): Original Post, Archive

Voice Update

No jokes today on “serious Sunday.”

Many of you asked about my voice. As I’ve explained in this blog, about two years ago I suddenly acquired a bizarre and exotic voice problem called spasmodic dysphonia. I couldn’t speak for about 18 months unless I was on stage doing my public speaking, or alone, or singing. The rest of the time my vocal cords would clench and I could barely get out a word.

Other people with this condition report the same bizarre symptoms. We can also often speak perfectly in funny British accents but not in our own voices. We can speak after we have laughed or yawned. Sometimes it helps to pinch our noses or cover our ears. I found I can talk okay if I stretch my head back and look at the ceiling or close my eyes. And we can all sing and hum just fine.

It looks like a whacky mental problem, except that it comes on suddenly and everyone has a similar set of symptoms regardless of their psychological situation at the time. (It’s not as if we all have postpartum depression or just got back from war.)

The only widely-recognized treatment involves regular Botox shots through the front of the neck and directly into the vocal cords. But because the Botox takes some time to reach full impact, then immediately starts to wear off, you only have your best voice about half of that time. And the shots themselves are no picnic. I was hoping for a better solution, especially since I couldn’t do my public speaking after Botox injections because it weakened my voice too much to project on stage.

One day, long after the last Botox shot had worn off, I was repeating a nursery rhyme at home. I found that I could speak a poem fairly well even though I couldn’t speak a normal sentence. Suddenly something “clicked” in my brain and I could speak perfectly. Just like that. It was amazing.

[Note: I doubt the choice of poem had anything to do with it, but it was Jack Be Nimble.]

Many of you asked if it lasted. It did last, for several days. Then I got a cold, my throat got funky, I had to speak differently because of the cold, and lost it. After the cold wore off, it took a few weeks to get back to my current “okay” voice.

At the moment I can speak okay most of the time in quiet conversation. In other words, if there is no background noise, I can talk almost as if I never had the problem. That’s a HUGE improvement over the past.

But I still can’t speak in noisy environments. That’s common with this condition, and it has nothing to do with the need to speak loudly to talk over the noise. It has something to do with the outside sound coming into my brain and somehow disabling my speech function. If I cover my ears, I can speak almost normally.

Unfortunately for me, the world is a noisy place. So outside of conversations with my family at home, I still can’t have a normal conversation.

Today I am flying to Los Angeles to spend a week with Dr. Morton Cooper. He claims to be able to cure this problem completely – in many if not most cases – using his own brand of intensive voice exercises and feedback. I’ve communicated directly with several people who say that he did indeed fix their voices. The medical community’s reaction to his decades of curing this problem is that they say each of his cures is really just a case of a person who was misdiagnosed in the first place, since spasmodic dysphonia is incurable BY DEFINITION. But many of his cures have involved patients referred by the top specialists in the field of spasmodic dysphonia. So if they are all misdiagnosed, that would be a story in itself. Maybe I’m lucky enough to be misdiagnosed too.

I’m not sure how much blogging I will be able to do this week. I’ll let you know at the end of the week just how it went. It’s not a sudden cure, and would involve continued voice exercises to speak in the “correct” way, but I am told to expect significant progress after a week.

Wish me luck.



Voice Update [2] (January 21, 2007): Original Post, Archive

Voice Update

As regular readers know, about two years ago I lost my ability to speak. The problem is called spasmodic dysphonia (SD). This update is primarily for the benefit of the other people with SD. Many of you asked about my experience and for any advice. The rest of you will find this post too detailed. Feel free to skip it.

First, some background.

There are two types of spasmodic dysphonia.

Adductor: The vocal cords clench when you try to speak, causing a strangled sound. (That is my type.)

Abductor: The vocal cords open when you try to speak, causing a breathy whisper.

You can get more complete information, including hearing voice clips, at the National Spasmodic Dysphonia Association (NSDA) website: http://www.dysphonia.org/

The NSDA site describes the two medical procedures that are recommended by medical doctors:

1. Botox injections to the vocal cords, several times per year for the rest of your life.

2. Surgery on the vocal cords – a process that only works sometimes and has the risks of surgery.

What you won’t find at that site is information about Dr. Morton Cooper’s method of treating spasmodic dysphonia, using what he calls Direct Voice Rehabilitation. I just spent a week with Dr. Cooper. Dr. Cooper has been reporting “cures” of this condition for 35 years. He’s a Ph.D., not an MD, and possibly the most famous voice doctor in the world.

According to Dr. Cooper, the NSDA receives funding from Allergan, the company that sells Botox. Dr. Cooper alleges, in his new self-published book, CURING HOPELESS VOICES, that Allergan’s deep pockets control the information about spasmodic dysphonia, ensuring that it is seen as a neurological condition with only one reliable treatment: Botox. I have no opinion on that. I’m just telling you what Dr. Cooper says.

Botox shots are expensive. Your health insurance would cover it, but I heard estimates that averaged around $2,500 per shot. I believe it depends on the dose, and the dose varies for each individual. Each person receiving Botox for spasmodic dysphonia would need anywhere from 4 to 12 shots per year. Worldwide, Dr. Cooper estimates that millions of people have this condition. It’s big money. (The “official” estimates of people with SD are much lower. Dr. Cooper believes those estimates are way off.)

I have no first-hand knowledge of Allergan’s motives or activities. I can tell you that Botox worked for me. But it only gave me a “good” voice about half of the time. Individual results vary widely. Even individual treatments vary widely. I think I had about 5 treatments. Two were great. Two were marginal. One didn’t seem to help much. And the shots themselves are highly unpleasant for some people (but not very painful).

I’ve heard stories of people who feel entirely happy with Botox. For them, it’s a godsend. And I’ve heard stories of people who had okay results, like mine. Dr. Cooper says that people with the abductor type of dysphonia can be made worse by Botox. I know one person with the abductor type who lost his voice completely after Botox, but temporarily. Botox wears off on its own. It’s fairly safe in that sense.

I can tell you that Dr. Cooper’s method worked for me, far better than Botox. (More on that later.) And you can see for yourself that the NSDA web site doesn’t mention Dr. Cooper’s methods as an option. It doesn’t even mention his methods as something that you should avoid. It’s conspicuous in its absence.

Dr. Cooper claims that spasmodic dysphonia is not a neurological problem as is claimed by the medical community. He claims that it is caused by using the voice improperly until you essentially lose the ability to speak correctly. Most people (including me) get spasmodic dysphonia after a bout with some sort of routine throat problem such as allergies or bronchitis. The routine problem causes you to strain your voice. By the time the routine problem is cleared up, you’ve solidified your bad speaking habits and can’t find your way back. Dr. Cooper’s methods seek to teach you how to speak properly without any drugs or surgery.

Some people get spasmodic dysphonia without any obvious trigger. In those cases, the cause might be misuse of the voice over a long period of time, or something yet undiscovered.

Botox Versus Dr. Cooper
-------------------------------

Botox worked for me. It was almost impossible for me to have a conversation, or speak on the phone, until I got my first Botox shot.

But I had some complaints with the Botox-for-life method:

1. Botox made my voice functional, but not good. There was an unnatural breathiness to it, especially for the week or two after the shot. And the Botox wore off after several weeks, so there was always a period of poor voice until the next shot.

2. It looked as if I would need up to ten shots per year. That’s ten half days from my life each year, because of travel time. And the dread of the shot itself was always with me.

3. The shots aren’t physically painful in any meaningful way. But you do spend about a minute with a needle through the front of your throat, poking around for the right (two) places in the back of your throat. Your urges to cough and swallow are sometimes overwhelming, and that’s not something you want to do with a needle in your throat. (Other people – maybe most people – handle the shots without much problem.)

4. I couldn’t do public speaking with my “Botox voice.” It was too weak to project on stage. People with spasmodic dysphonia can often sing and act and do public speaking without symptoms. That was my situation. Public speaking is a big part of my income.

I used Botox to get through the “I do” part of my wedding in July of 2006. Then I took a break from it to see if I could make any gains without it. My voice worsened predictably as the last Botox shot wore off. But it stopped getting worse at a “sometimes okay, often bad” level that was still much better than the pre-Botox days.

I could speak almost perfectly when alone. I could speak well enough on stage. I could sing. About half of the time I could speak okay on the phone. In quiet conversations I was okay most of the time. But I could barely speak at all if there was any background noise.

Do you know how often you need to talk in the presence of background noise? It’s often. And it wasn’t just a case of trying to speak over the noise. There’s something mysterious about spasmodic dysphonia that shuts off your ability to speak if there is background noise.

As I wrote in a previous post, one day I was practicing my speaking with a nursery rhyme at home. Something happened. My normal voice returned. It happened suddenly, and it stuck. The media picked up the story from my blog and suddenly it was national news.

My voice stayed great until I caught a cold a few weeks later. The cold changed my speaking pattern, and I regressed. With practice, I brought it back to the point where I could have quiet conversations. But I was still bedeviled by background noise and sometimes the phone. Despite my lingering problems, it was still amazing that anyone with spasmodic dysphonia would have that much of a spontaneous recovery. I’ve yet to hear of another case. But it wasn’t good enough.

After the media flurry, I got a message from Dr. Cooper. He listened to me on the phone, having an especially bad phone day, and he said he could help. I listened to his spiel, about how it’s not really a neurological problem, that he’s been curing it for years, and that the medical community is in the pocket of Allergan.

Dr. Cooper is what can be described as a “character.” He’s 75, has a deep, wonderful voice, and gives every impression of being a crackpot conspiracy theorist. His price was $5K per week, and he reckoned from my phone voice that I needed at least a week of working with him, with a small group of other spasmodic dysphonia patients. Two weeks of work would be better. (The hardcore cases take a month.) I would have to fly to LA and live in a nearby hotel for a week. So it’s an expensive proposition unless you can get your insurance to pay for it. (Sometimes they do if you have a referral from a neurologist.)

Needless to say, I was skeptical. Dr. Cooper sent me his DVD that shows patients before and after. I still wasn’t convinced. I asked for references. I spoke with a well-known celebrity who said Dr. Cooper helped him. I heard by e-mail from some other people who said Dr. Cooper helped them.

You can see video of before and after patients on his web site at: http://www.voice-doctor.com/

I figured, What the hell? I could afford it. I could find a week. If it didn’t work after a few days, I could go home.

With Dr. Cooper’s permission, I will describe his theory and his treatment process as best I can.

THEORY
------------

People with spasmodic dysphonia (SD) can’t hear their own voices properly. Their hearing is fine in general. The only exception is their own voices. In particular, SD people think they are shouting when they speak in a normal voice. I confirmed that to be true with me. I needed three other patients, Dr. Cooper, a recording of me in conversation, and my mother on the telephone to tell me that I wasn’t shouting when I speak normally. It has something to do with the fact that I hear my own voice through the bones in my head. In a crowded restaurant, if I speak in a voice to be heard across the table, I am positive it can be heard across the entire restaurant. Most SD patients have this illusion.

People with SD speak too low in the throat, because society gives us the impression that a deep voice sounds better. Our deep voice becomes so much a part of our self image and identity that we resist speaking in the higher pitch that would allow us to speak perfectly. Moreover, SD people have a hugely difficult time maintaining speech at a high pitch because they can’t hear the difference between the higher and lower pitch. Again, this is not a general hearing problem, just a problem with hearing your own voice. I confirmed that to be true with me. When I think I am speaking like a little girl, it sounds normal when played back on a recording.

(People with abductor SD are sometimes the opposite. They speak at too high a pitch and need to speak lower. That doesn’t seem to be a societal identity thing as much as a bad habit.)

Since SD people can’t “hear” themselves properly, they can’t speak properly. It’s similar to the problem that deaf people have, but a different flavor. As a substitute for hearing yourself, Dr. Cooper’s voice rehabilitation therapy involves intensive practice until you can “feel” the right vibration in your face. You learn to recognize your correct voice by feel instead of sound.

People with SD breathe “backwards” when they talk. Instead of exhaling normally while talking, our stomachs stiffen up and we stop breathing. That provides no “gas for the car” as Dr. Cooper is fond of saying. You can’t talk unless air is coming out of your lungs. I confirmed this to be true for all four patients in my group. Each of us essentially stopped breathing when we tried to talk.

The breathing issue explains to me why people with SD can often sing, or in my case speak on stage. You naturally breathe differently in those situations.

DR. COOPER’S METHOD
----------------------------------

He calls it Direct Voice Rehabilitation. I thought it was a fancy marketing way of saying “speech therapy,” but over time I came to agree that it’s different enough to deserve its own name.

Regular speech therapy – which I had already tried to some degree – uses some methods that Dr. Cooper regards as useless or even harmful. For example, a typical speech therapy exercise is to do the “glottal fry” in your throat, essentially a deep motorboat type of sound. Dr. Cooper teaches you to unlearn using that part of the throat for ANYTHING because that’s where the problem is.

Regular speech therapy also teaches you to practice the sounds that give you trouble. Dr. Cooper’s method involves changing the pitch and breathing, and that automatically fixes your ability to say all sounds.

To put it another way, regular speech therapy for SD involves practice speaking with the “wrong” part of your throat, according to Dr. Cooper. If true, this would explain why regular speech therapy is completely ineffective in treating SD.

Dr. Cooper’s method involves these elements:

1. Learning to breathe correctly while speaking
2. Learning to speak at the right pitch
3. Learning to work around your illusion of your own voice.
4. Intense practice all day.

While each of these things is individually easy, it’s surprisingly hard to learn how to breathe, hit the right pitch, and think at the same time. That’s why it takes anywhere from a week to a month of intense practice to get it.

Compare it to learning tennis, where you have to keep your eye on the ball, use the right stroke, and have the right footwork. Individually, those processes are easy to learn. But it takes a long time to do them all correctly at the same time.

NUTS AND BOLTS
-------------------------

I spent Monday through Friday, from 9 am to 2 pm, at Dr. Cooper’s office. Lunchtime was also used for practicing as a group in a noisy restaurant environment. This level of intensity seemed important to me. For a solid week, I focused on speaking correctly all of the time. I doubt it would be as effective to spend the same amount of time in one hour increments, because you would slip into bad habits too quickly in between sessions.

Dr. Cooper started by showing us how we were breathing incorrectly. I don’t think any of us believed it until we literally put hands on each others’ stomachs and observed. Sure enough, our stomachs didn’t collapse as we spoke. So we all learned to breathe right, first silently, then while humming, and allowing our stomachs to relax on the exhale.

The first two days we spent a few hours in our own rooms humming into devices that showed our pitch. It’s easier to hum the right pitch than to speak it, for some reason. The point of the humming was to learn to “feel” the right pitch in the vibrations of our face. To find the right pitch, you hum the first bar of the “Happy Birthday” song. You can also find it by saying “mm-hmm” in the way you would say it if agreeing with someone in a happy and upbeat way.

The patients who had SD the longest literally couldn’t hum at first. But with lots of work, they started to get it.

Dr. Cooper would pop in on each of us during practice and remind us of the basics. We’d try to talk, and he’d point out that our stomachs weren’t moving, or that our pitch was too low.

Eventually I graduated to humming words at the right pitch. I didn’t say the words, just hummed them. Then I graduated to hum-talking. I would hum briefly and then pronounce a word at the same pitch, as in:

mmm-cow
mmm-horse
mmm-chair

We had frequent group meetings where Dr. Cooper used a 1960s vintage recorder to interview us and make us talk. This was an opportunity for us all to see each other’s progress and for him to reinforce the lessons and correct mistakes. And it was a confidence booster because any good sentences were met with group compliments. The confidence factor can’t be discounted. There is something about knowing you can do something that makes it easier to do. And the positive feedback made a huge difference. Likewise, seeing someone else’s progress made you realize that you could do the same.

When SD people talk, they often drop words, like a bad cell phone connection. So if an SD patient tries to say, “The baby has a ball,” it might sound like “The b---y –as a –all.” Dr. Cooper had two tricks for fixing that, in addition to the breathing and higher pitch, which takes care of most of it.

One trick is to up-talk the problem words, meaning to raise your pitch on the syllables you would normally drop your pitch on. In your head, it sounds wrong, but to others, it sounds about right. For example, with the word “baby” I would normally drop down in pitch from the first b to the second, and that would cause my problem. But if I speak it as though the entire word goes up in pitch, it comes out okay, as long as I also breathe correctly.

Another trick is humming into the problem words as if you are thinking. So when I have trouble ordering a Diet Coke (Diet is hard to say), instead I can say, “I’ll have a mmm-Diet Coke.” It looks like I’m just pausing to think.

Dr. Cooper invented what he calls the “C Spot” method for finding the right vocal pitch. You put two fingers on your stomach, just below the breastbone, and talk while pressing it quickly and repeatedly, like a fast Morse code operator. It sort of tickles, sort of relaxes you, sort of changes your breathing, and makes you sound like you are sitting on a washing machine, e.g. uh-uh-uh-uh. But it helps you find your right pitch.

Dr. Cooper repeats himself a lot. (If any of his patients are reading this, they are laughing at my understatement.) At first it seems nutty. Eventually you realize that he’s using a Rasputin-like approach to drill these simple concepts into you via repetition. I can’t begin to tell you how many times he repeated the advice to speak higher and breathe right, each time as if it was the first.

Eventually we patients were telling each other to keep our pitches up, or down. The peer influence and the continuous feedback were essential, as were the forays into the noisy real world to practice. Normal speech therapy won’t give you that.

Toward the end of the week we were encouraged to make phone calls and practice on the phone. For people with SD, talking on the phone is virtually impossible. I could speak flawlessly on the phone by the end of the week.

RESULTS
-------------

During my week, there were three other patients with SD in the group. Three of us had the adductor type and one had abductor. One patient had SD for 30 years, another for 18, one for 3 years, and I had it for 2. The patients who had it the longest were recommended for a one month stay, but only one could afford the time to do it.

The patient with SD for 3 years had the abductor type and spoke in a high, garbled voice. His goal was to speak at a lower pitch, and by the end of the week he could do it, albeit with some concentration. It was a huge improvement.

The patient with SD for 30 years learned to speak perfectly whenever she kept her pitch high. But after only one week of training, she couldn’t summon that pitch and keep it all the time. I would say she had a 25% improvement in a week. That tracked with Dr. Cooper’s expectations from the start.

The patient with SD for 18 years could barely speak above a hoarse whisper at the beginning of the week. By the end of the week she could often produce normal words. I’d say she was at least 25% better. She could have benefited from another three weeks.

I went from being unable to speak in noisy environments to being able to communicate fairly well as long as I keep my pitch high. And when I slip, I can identify exactly what I did wrong. I don’t know how to put a percentage improvement on my case, but the difference is life changing. I expect continued improvement with practice, now that I have the method down. I still have trouble judging my own volume and pitch from the sound, but I know what it “feels” like to do it right.

Dr. Cooper claims decades of “cures” for allegedly incurable SD, and offers plenty of documentation to support the claim, including video of before-and-afters, and peer reviewed papers. I am not qualified to judge what is a cure and what is an improvement or a workaround. But from my experience, it produces results.

If SD is a neurological problem, it’s hard to explain why people can recover just by talking differently. It’s also hard to understand how bronchitis causes that neurological problem in the first place. So while I am not qualified to judge Dr. Cooper’s theories, they do pass the sniff test with flying colors.

And remember that nursery rhyme that seemed to help me the first time? Guess what pitch I repeated it in. It was higher than normal.

I hope this information helps.


Pages: 1 ... 27 28 [29] 30