Transfer ORM with Mark Mandel


I have been working long and hard on a project at work (yes, I have a real job outside of working on notPop and related sites).
I’m lucky enough to work with Luis Majano, the creator of the ColdBox ColdFusion application framework. Working with ColdBox has really improved the performance of the site, and it has brought the company fully into object-oriented ColdFusion programming.

Along with ColdBox, we have really relied on Transfer ORM, created by Mark Mandel, to handle 99% of our database operations. Again, a huge improvement over what we were doing in the past, and a great time saver as well.

I’m a member of the Inland Empire ColdFusion Users Group, and last Friday we had our monthly meeting, hosted by Luis Majano, with Mark Mandel as the guest speaker, talking about Transfer. It was a really great presentation, with a lot of great information.

We recorded the presentation through Adobe Connect, so it’s available online. If you are interested in taking your ColdFusion database operations to the next level, I suggest you check out Mark Mandel’s presentation. I learned a lot, and I’m sure you will too.

Check it out: https://admin.adobe.acrobat.com/_a200985228/p10885427

Amazon S3 + PHP = Awesome image hosting solution

Recently I have been looking for a solution to allow users of notpopular.com to have photo albums of unlimited size. It’s a popular trend in social networks, and it’s way cooler than the 5 tiny images that are allowed on notPopular.com (1.0).

There are many technical problems associated with an open system like this. Two of the biggest are storage (disk space) and load on the web server. Each image takes up space on the hard drives and demands attention from the web server whenever it is requested, taking attention and CPU away from more important processes.

After looking at several options, I came across Amazon S3 (Simple Storage Service). Amazon S3 is an online storage web service offered by Amazon Web Services. It provides unlimited storage through a simple web services interface, and data can be easily stored and retrieved at any time, from anywhere on the web. Amazon charges in proportion to the amount of data stored, plus fees for data transferred in and out. A perfect solution for what I’m looking to do!

S3 would store the data, and would handle the load of the requests coming from users, allowing my web server to focus on more important tasks like PHP processing and MySQL queries.

Here is the price breakdown for Amazon S3 usage:

  • Storage
    • $0.15 per GB-Month of storage used
  • Data Transfer
    • $0.10 per GB – all data transfer in
    • $0.18 per GB – first 10 TB / month data transfer out
    • $0.16 per GB – next 40 TB / month data transfer out
    • $0.13 per GB – data transfer out / month over 50 TB
  • Requests
    • $0.01 per 1,000 PUT or LIST requests
    • $0.01 per 10,000 GET and all other requests

Using the calculator tool Amazon provides, and looking at the notPop data and projected traffic, my costs would be around $20 a month, and even if usage doubled over time, it would still only be about $40 a month. Very reasonable for the speed and reliability that a company like Amazon offers.
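
To put rough numbers behind that estimate (these figures are just an illustration, not my real stats): 30 GB stored plus 80 GB transferred out in a month comes to 30 × $0.15 + 80 × $0.18 = $4.50 + $14.40, or about $19, with the per-request fees adding pennies on top.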

I found some great PHP classes for working with S3, particularly one from Geoffrey P. Gaudreault over at neurofuzzy.net.
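
I won’t paste his class here, but to give a feel for what the S3 REST API looks like under the hood, here is a bare-bones sketch of a PUT upload in plain PHP with curl. The bucket name, object key, and credentials are placeholders, and this is my own simplification of the protocol, not Geoffrey’s API:

<?php
// Bare-bones S3 PUT; bucket, key, and credentials below are placeholders.
$accessKey   = 'YOUR_ACCESS_KEY';
$secretKey   = 'YOUR_SECRET_KEY';
$bucket      = 'notpop-images';        // hypothetical bucket
$object      = 'albums/42/photo1.jpg'; // hypothetical object key
$file        = '/tmp/photo1.jpg';
$contentType = 'image/jpeg';
$date        = gmdate('D, d M Y H:i:s \G\M\T');

// S3 authenticates requests with an HMAC-SHA1 signature over a canonical string
$stringToSign = "PUT\n\n$contentType\n$date\nx-amz-acl:public-read\n/$bucket/$object";
$signature = base64_encode(hash_hmac('sha1', $stringToSign, $secretKey, true));

$ch = curl_init("http://$bucket.s3.amazonaws.com/$object");
curl_setopt($ch, CURLOPT_PUT, true);
curl_setopt($ch, CURLOPT_INFILE, fopen($file, 'rb'));
curl_setopt($ch, CURLOPT_INFILESIZE, filesize($file));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_HTTPHEADER, array(
    "Date: $date",
    "Content-Type: $contentType",
    "x-amz-acl: public-read", // make the image publicly readable
    "Authorization: AWS $accessKey:$signature",
));
curl_exec($ch);
$status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
curl_close($ch);

echo $status == 200 ? "uploaded\n" : "upload failed (HTTP $status)\n";

Once the object is up, an <img> tag can point straight at the bucket URL and my web server never touches the file again.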

I am also planning on using a CNAME record for the notPop domain to mask the Amazon URL. Should be easy to do.
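
From what I have read in the S3 docs, the trick is that the bucket has to be named after the hostname you want to serve from. With a hypothetical hostname, the zone file entry would look like:

images.notpopular.com. IN CNAME images.notpopular.com.s3.amazonaws.com.

Then http://images.notpopular.com/albums/42/photo1.jpg serves straight from S3, with no amazonaws.com showing in the URL.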

Last night I started to stitch Amazon S3 into the pending notPop 2.0 code base. I will make some follow-up posts about how it goes once I really start using it.

cPanel broke mysqlhotcopy 1.22, but here is a fix!

Yesterday I logged into my server to back up my databases. I back up the MySQL databases using a tool called “mysqlhotcopy”. It’s a handy Perl script that copies the raw database files for easy restores, and it runs really quickly.
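
For anyone who hasn’t used it, a typical run looks something like this (the database name and backup path here are made up):

mysqlhotcopy --user=root --password=secret mydatabase /backups/mysql/

It locks the tables, copies the underlying files, and unlocks, which is why it is generally limited to MyISAM tables.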

I ran the mysqlhotcopy command as root, and was greeted with an error similar to this:
Invalid db.table name 'foo.foo`.`bar'

Um, no… that table exists.

I have cPanel installed on my server, as a lot of servers do. cPanel automatically updates many applications, and mysqlhotcopy is one of them. It looks like my copy of mysqlhotcopy was updated to version 1.22, and there are some major problems with mysqlhotcopy 1.22.

I did some Googling and found some people talking about the problem. I even found a quick patch for it. The problem comes from a host adding the username and an underscore in front of a database name (for example: “username_databasename.table”), while mysqlhotcopy only looks for “databasename.table”.

The following patch instructions will fix mysqlhotcopy 1.22:

  • Open the Perl script for editing. It is located at /usr/bin/mysqlhotcopy
  • Find the following subroutine:

    sub get_list_of_tables {
        my ( $db ) = @_;

        my $dbh = DBI->connect("dbi:mysql:${db}${dsn};mysql_read_default_group=mysqlhotcopy",
            $opt{user}, $opt{password},
            {
                RaiseError => 1,
                PrintError => 0,
                AutoCommit => 1,
            });

        my @dbh_tables = eval { $dbh->tables() };
        $dbh->disconnect();
        return @dbh_tables;
    }

  • Look for this line (mine was line 821):

    my @dbh_tables = eval { $dbh->tables() };

  • Immediately after that line, add the following (note the trailing semicolon):

    map { s/^.*?\.//o } @dbh_tables;

The substitution strips everything up to and including the first dot, so a prefixed name like “username_databasename.table” becomes the plain “table” that the rest of the script expects.

Here is my patched subroutine:

sub get_list_of_tables {
    my ( $db ) = @_;

    my $dbh = DBI->connect("dbi:mysql:${db}${dsn};mysql_read_default_group=mysqlhotcopy",
        $opt{user}, $opt{password},
        {
            RaiseError => 1,
            PrintError => 0,
            AutoCommit => 1,
        });

    my @dbh_tables = eval { $dbh->tables() };
    # strip any leading "dbname." prefix so the rest of the script sees bare table names
    map { s/^.*?\.//o } @dbh_tables;
    $dbh->disconnect();
    return @dbh_tables;
}

After I applied the patch, everything was back to working order.

Some people have other approaches that would also work, like downgrading mysqlhotcopy altogether.

Personally, I think adding one line of code isn’t that big of a deal to fix the program.

Experiments with MySQL FULLTEXT searching

I just read a great article on FULLTEXT searching in a MySQL database (http://www.onlamp.com/pub/a/onlamp/2003/06/26/fulltext.html).

I took a few minutes to run some tests of my own using two of the largest text fields on the notPopular.com user profile table. I wanted to compare the results of a FULLTEXT search versus a LIKE query using real data in my database.

First I did a search using the LIKE operator, looking for “xdeathstarx”:


SELECT *
FROM profiles
WHERE approved = 1
AND
(interests_general LIKE '%xdeathstarx%'
OR
interests_music LIKE '%xdeathstarx%')

Showing rows 0 – 29 (119 total, Query took 0.0763 sec)

The query was fast, taking under a tenth of a second, but the leading wildcard in LIKE '%xdeathstarx%' means MySQL can’t use a normal index for it; every row has to be scanned.

Next I created a FULLTEXT index on the “interests_general” and “interests_music” fields. It was really easy to do.
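
Creating the index is a one-liner (the index name here is my own choice, and the table has to use MyISAM, the only storage engine that supported FULLTEXT at the time):

ALTER TABLE profiles
ADD FULLTEXT ft_interests (interests_general, interests_music);

Then I reran the search using MATCH ... AGAINST: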


SELECT *
FROM profiles
WHERE approved = 1
AND MATCH (interests_general, interests_music)
AGAINST ('xdeathstarx')

Showing rows 0 – 29 (119 total, Query took 0.0016 sec)

Talk about fast! It took 0.0016 seconds.

Let’s compare the differences:
LIKE query: 0.0763 sec
FULLTEXT search: 0.0016 sec

Hands down, the FULLTEXT search won; it was about 50x faster. If that isn’t instant results, I don’t know what is. Sure, the FULLTEXT index will use more disk space, but disk space is cheap, and CPU cycles aren’t.

This makes me want to look more into optimizing the queries I am planning on using for the ever-elusive notPop 2.0.