Underwater photography by Andy Kirkland

It all kinda comes together …

Here are two key decisions I’ve made in running the images bit of my site.

Keyword Classes

The first – taken way back when I was using Photoshop Elements and JAlbum to generate “static” albums – was to “structure” my keywords into classes. That means identifying what type of information each keyword carries, and it was fundamental to linking all the images to the dive logbook software.

So – to recap – the dive number (for example) is prefixed by the characters [dn]. The value is extracted and used to get the dive site (etc.) details from a database.
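
Roughly speaking, the lookup works along these lines – although the connection, table and column names in this little php sketch are just for illustration, not my actual schema:

    <?php
    // A keyword like "[dn]1234" carries the dive number
    $keyword = '[dn]1234';

    if (preg_match('/^\[dn\](\d+)$/', $keyword, $m)) {
        $diveNo = (int) $m[1];

        // Fetch the dive details (illustrative connection / table / columns)
        $pdo  = new PDO('mysql:host=localhost;dbname=divelog', 'user', 'password');
        $stmt = $pdo->prepare('SELECT site, country, dive_date FROM dives WHERE dive_no = ?');
        $stmt->execute(array($diveNo));
        $dive = $stmt->fetch(PDO::FETCH_ASSOC);
    }
    ?>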

I’ve also analysed the common and scientific names of the marine creatures (where I can). This lets me link through to the wonderful FishBase site.

By assigning these in the editing / photo management software (using the IPTC Keywords metadata field, for those who are interested), I can parse them and use them in smart ways. In the early days the treatment was a bit patchy (some software just discarded them), but most image software handles them now. I use Lightroom, but Picasa can do it just as easily.
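
For the curious, php can pull the IPTC block straight out of a JPEG – something like this minimal sketch (error handling left out):

    <?php
    // Read the APP13 (IPTC) segment and pull out the Keywords field (2#025)
    getimagesize('115PC110016.jpg', $info);

    $keywords = array();
    if (isset($info['APP13'])) {
        $iptc = iptcparse($info['APP13']);
        if (isset($iptc['2#025'])) {
            $keywords = $iptc['2#025'];      // one entry per keyword
        }
    }

    // Split the keywords into classes by their prefix, e.g. "[dn]1234"
    $classes = array('plain' => array());
    foreach ($keywords as $kw) {
        if (preg_match('/^\[(\w+)\](.*)$/', $kw, $m)) {
            $classes[$m[1]][] = trim($m[2]);
        } else {
            $classes['plain'][] = $kw;       // keywords with no class prefix
        }
    }
    ?>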

This means that I don’t have to keep typing “Amphiprion bicinctus – Twoband anemonefish” into the caption field. And the different classes can be processed (and presented) in different ways – so I can (and do) include the details from the keywords in the HTML meta keywords and in the page title, for example.
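
Carrying on from the $classes array in the sketch above, building the page head looks something like this – the [cn] and [sn] prefixes for the common and scientific names are just examples:

    <?php
    $common = isset($classes['cn'][0]) ? $classes['cn'][0] : '';   // e.g. "Twoband anemonefish"
    $latin  = isset($classes['sn'][0]) ? $classes['sn'][0] : '';   // e.g. "Amphiprion bicinctus"

    $title = trim($common . ($latin !== '' ? ' (' . $latin . ')' : ''));

    echo '<title>' . htmlspecialchars($title) . "</title>\n";
    echo '<meta name="keywords" content="'
       . htmlspecialchars(implode(', ', array_filter(array($common, $latin))))
       . '" />' . "\n";
    ?>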

php

The other key decision was to use a server-side scripting tool, and I settled on php – I can develop it on my PC / Mac, there aren’t lots of components to be integrated, and it’s pretty well self-contained. And free.

At the back end of last year, I reloaded most of my image albums to use this technology.

The big advantage of a server-side tool is that the behaviour of the site can be changed by editing a single script, rather than each individual web page. As I’ve now got several thousand images, that’s no small thing.

But as well as this, each page is generated dynamically, which means that logic can be applied to the presentation (so the layout can change depending on the content) – I can use the data to “drive” the page.
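
In practice that boils down to one script serving every image page – something along these lines (the script and folder names are only for illustration):

    <?php
    // album.php?img=115PC110016 – a single script generates every image page
    $img  = isset($_GET['img']) ? basename($_GET['img']) : '';
    $file = __DIR__ . '/originals/' . $img . '.jpg';

    if ($img === '' || !is_file($file)) {
        header('HTTP/1.0 404 Not Found');
        exit;
    }

    // ... read the keywords, look up the dive, then pick a layout and
    //     emit the HTML based on what the data says ...
    ?>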

Coupled with the arcane voodoo of Apache’s mod_rewrite functionality, I can make it appear as though there are individual web pages.

Search engines and photos

But the big problem with getting these photos recognised by the search engines – particularly Google – is that those engines don’t look at the metadata. In fact, the vast majority (almost all) of the weighting in Google’s image search comes from the title of the image.

This was brought home to me when I was catching up with the TWIP podcast and heard an interview with John Pozadzides, CEO of Woopra. John P pointed out that to get Google love, you need to rename all of your photos to include the keywords.

Obviously, this would be a real pain, and I’d end up with lots of photos called “twoband_anemonefish.jpg” – many of them living in the same location. But, I thought, that data’s available in the metadata. And using mod_rewrite and php, I can take a file called (for example) 115PC110016.jpg and make it appear as though the file is called “twoband_anemonefish.jpg” (taking the title from one of the keywords), living in a subfolder called 115PC110016. I can put a bit of mod_rewrite code in to ignore the bits I don’t need, so I don’t have to physically rename any files.
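
The sketch below shows the idea – the rule and script names are illustrative rather than exactly what’s running here:

    <?php
    // A hypothetical .htaccess rule: everything after the real image id is
    // decorative, so the "friendly" filename never has to exist on disk:
    //
    //   RewriteEngine On
    //   RewriteRule ^albums/([0-9A-Za-z]+)/.+\.jpg$ image.php?id=$1 [L]
    //
    // image.php then serves the original, camera-named file:
    $id   = isset($_GET['id']) ? basename($_GET['id']) : '';
    $path = __DIR__ . '/originals/' . $id . '.jpg';

    if ($id === '' || !is_file($path)) {
        header('HTTP/1.0 404 Not Found');
        exit;
    }

    header('Content-Type: image/jpeg');
    readfile($path);
    ?>

So /albums/115PC110016/twoband_anemonefish.jpg quietly serves up the original 115PC110016.jpg, and the friendly name is all the search engines ever see.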

And all I need to do is change the version of the scripts each album uses …

I’ll be checking to see how this impacts the number of visitors I get …
