Monday 28 September 2009

Image Findability: Improving through Tags

Take a look at my recent article on Image Findability on FUMSI - bit.ly/LQ3UP

My article outlines the options available for tagging images to meet a business need - selling, sharing, reducing duplication of effort and so on. It assumes an image-focused audit or assessment has already established how image content is created and used, and that the task now is to choose from a set of options in order to create a tagging plan, with a set of rules, guidelines and success metrics.

Friday 25 September 2009

Need to Create Good Work Fast? Simple - Get a New Computer

I have a problem. I have six pieces of work to write in a couple of weeks and I'm under pressure. I need the work to be spot on, of the highest quality and created in the shortest space of time.

The answer to my problem? Buy a new computer.

Does this sound strange to you? Can you see how improved output comes from a new computer?

I was sceptical, but the Sales guy said a new computer was the answer. I asked him to explain, and he told me how the time I was wasting messing with my old computer was at the heart of my problem. All those lost minutes fixing crashes, worrying about blue screens, battling with slow performance, scanning for adware, spyware and worse. Forget all that, was the message I was getting: move to the promised land of a newer, faster computer and your problems are solved. After a bit more chat I was sold. My new computer would save me time, and that extra time would be devoted to my key tasks, which in turn would lead to better quality work, and faster work at that. The time saved was even money in the bank to set against the cost of the computer - so it wasn't even as expensive as I'd thought.

At this point I excused myself, had a coffee, and thought it through one more time. Did it make sense that a new computer was my solution? The light quickly dawned: of course it didn't. A new computer wasn't the solution, and time saving was not my key issue. How did the Sales guy know that time saved would be time I'd actually spend on my document tasks? How did he know the processes and tasks I'd been performing with my current computer were not valuable experiences, not to be lightly discarded? Why did he make no attempt to understand me and my circumstances, and simply sell me the one-size-fits-all Sales line that so many people still hear today?

I soon realised that I'm better off assessing my goals and objectives. What is it I need to do? For whom? Why? And when? Then I need to ensure I'm prepared and enabled to achieve them. Is my broadband connection operating? Is it fast enough? Is the right software up and running? Can I access the libraries I need?

I would also benefit from improving my time planning and management skills. I need to focus on my key tasks. What is it I need to do? What problems am I having here? I also should not forget my deliverables. What do I need to produce and how do I get there?

All these areas, when addressed in the right way, will enable my tasks and improve my outcomes. Granted, this is a little harder to sell than a new computer equals better work and a wonderful life, but surely I'm worth that extra effort and it's certainly what I need to hear.

Many of us encounter this scenario frequently. How many times have you watched a Sales presentation built around saving time? Usually a calculator is involved, and sometimes members of the audience are asked to volunteer key pieces of information - "How much time do you spend searching for information in a day?", "What's your hourly rate?", "How hard do you find tracking down the information you need?", "Could you be more productive if you saved some of this time?" Very often 'time saved' is then calculated and directly equated to business advantage, with little or no thought given to the needs or objectives of individual businesses, or any injection of common sense into the Sales pitch.

A Dow Jones information assessment looks for the real issues and pain points our clients experience, and works with them to solve their problems and enable improved outcomes. If you have an information management issue you need assistance with, speak to us and let us work with you to get to the heart of your needs. You never know, you might even save enough money to afford that new computer you've always wanted!

Ian

This post first appeared at the Synaptica Central blog

Passionate Geographers

I recently noticed a very interesting initiative - Project Geograph: Photograph Every Grid Square.

This project is working towards collecting and making available images depicting the geography of every square kilometre of the British Isles. This ambitious project seems to be progressing very well, with many good quality images loaded to the website.

Already over 8,900 contributors have submitted nearly 1,500,000 images, with an average of 5 images associated with each geographic square across England, Wales, Scotland and Ireland. This is a great resource, preserving in amazing detail what the British Isles looked like at the start of the 21st century. It is also a wonderful way to learn about the geography of these amazing islands and to dig deeply into their hills, valleys, towns and villages, and a superb source for genealogists looking at how a particular part of the British Isles looks today.

Back in 2007 I attended the Blogs and Social Media Conference 2.0 in London. One presentation which has stayed in my mind since then was Lee Bryant's "Engaging with Passionates". In his exceptional presentation Lee described a ground-breaking social networking case study and talked about the energy that can be released when organisations successfully tap into a group of people who are truly passionate about a given topic.

I think you'd be hard pressed to find a better example of the power of passionates than the Geograph Project. Looking at the number of contributors, the amount of the British Isles covered, and the quality of the photography and metadata created, makes a clear point - find people who are passionate about a topic, people who are committed to a hobby or interest, engage them in the right way and they will deliver time and again.

I wish everyone associated with the Geograph Project all the luck in the world, may they stay passionate and committed to what they do, and may their project benefit from their commitment.

Oh, and if you like what you see, submit a photograph, or start a similar initiative.

Ian

This post first appeared in the Synaptica Central Blog

Report from Digital Asset Management (DAM) Conference - London, 1 July

I spent Wednesday 1st July at the Henry Stewart DAM Conference in London.

In my slot I talked about "Tagging Images for Findability - Making Your DAM System Work for You." I used my 30 minutes to raise the issue of organising images using metadata and controlled vocabulary to connect the images to the people who want to use them. I spent a little time looking at the ways to use text to categorise images and the advantages and disadvantages that brings. I devoted a lot of the presentation to raising issues to watch out for when tagging images, in particular specificity and focus in image depictions, abstract concepts and image 'aboutness', and the deceptive simplicity of visually simple images.

A far braver presentation than mine was given by Madi Solomon. Madi ditched the PowerPoint presentation to facilitate a refreshing debate on metadata. Questions from the floor came thick and fast. Madi did a great job of presenting 'on the edge' and drew out the experiences of many of the attendees and the challenges they were facing.

Also of note at the conference was a very informative presentation from Theresa Regli on 'Evaluating and Selecting Technologies' and a stimulating piece from Mark Davey on the old chestnut of ROI and Digital Asset Management Systems. Mark took a pretty dry subject and a slot directly after a good lunch and succeeded brilliantly in making it entertaining, informative and practical. Take a look at his excellent presentation Digital Asset Management ROI - the basics. I think this is a key resource for anyone interested in return on investment in the DAM space and it's fun to watch too.

I had a great day at DAM London and I hope my fellow delegates found the presentations as helpful and enlightening as I did.

Ian

This post first appeared on the Synaptica Central blog

Report from the ISKO Content Architecture Conference - 22-23 June, London, UK

I spent Monday and Tuesday of this week at the fascinating ISKO Content Architecture Conference.

On Monday I gave a presentation on "Still Digital Images - the hardest things to classify and find."

My presentation looked at the image market and the ways in which images can be annotated - or is that processed, classified, categorized, tagged, keyworded… We need a controlled vocabulary to control the vocabulary of controlled vocabulary!

I then went on to raise some of the challenges of image organization and retrieval - picking out the need to consider different image domains and user groups, and considering how to provide users with access to basic attributes, depicted content and abstract concepts linked to images.

There were some amazingly interesting presentations over the two days of this event.

Highlights for me included a great keynote from David Crystal looking at the evolution of the linguistic approach to content analysis. Madi Solomon highlighted the challenges faced by Disney and Pearson in the management of content using metadata. Charles Inskip opened my mind to music categorization and sale, and the many similarities with image retrieval and organization. Also intriguing was the work showcased by the BBC's Tom Scott, who spoke about 'Building Coherence at bbc.co.uk'.

As always at these events, interesting posters and presentations abounded, and this blog can only give a flavour of them.

If you want to know more, the organizers have made abstracts available online, and in some cases full papers. They also plan to make the slides of individual presentations available along with recorded audio. I'm told the full set of resources will be on the conference website in the next few weeks.

Next week I'm at a Digital Asset Management (DAM) conference in London talking about "Tagging Images for Findability: making your DAM system work for you." More about that next week.

Ian

This post first appeared in the Synaptica Central Blog

Classifying Images Part 3: Depicted Content

Welcome back to my occasional image classification series.

The last time I raised the topic of image classification I discussed the basic attributes of images. This time I want to focus on the thornier issue of the content, or concepts, depicted in them.

There is a danger of treating an image like a piece of text and classifying its attributes: Who created it? When? What techniques were used? Then writing a title or caption and leaving it at that. Sometimes little more need be done to a document than record this kind of information, especially with free text searching, but lots more needs to be done to most images.

Image findability

Image findability is the process of using search and browse to access the images required. A major aspect of image findability relates to the things depicted in images. Image users often search based on the generic things shown in an image, and also on the proper names of those things. Classifying images based on depicted content means considering anything and everything that is and can be depicted in an image. When considering this I like to focus my efforts on understanding the images I'm dealing with, the users who are trying to find and work with the images, and the ways in which these people need to search and browse for the images they need. After an assessment of these areas I then tailor my approach.

Broadly speaking, people searching for depicted content are looking for a number of types (a simple tag-record sketch follows these lists):
  • Places: cities, towns, villages, streets...
  • Built works: parks, skyscrapers, cottages, walls, doors, windows...
  • Topography: mountains, valleys...
  • Groups and organisations: air forces, choirs, police departments...
  • People: roles, occupations, ethnicity and nationality: mothers, doctors, Caucasians, French, Germans...
  • Actions, activities and events: running, writing, laughing, smiling, birthdays, parties, book signings, meetings...
  • Objects: a myriad of items...
  • Animals and plants: common and scientific names...
  • Anatomy and attributes of people, animals and plants: arms, legs, adults, leaves, trunks, paws, tails...
  • Depicted text - signs, notices and other writing shown in images...
Many of these generic types can also have proper named instances:
  • Proper names of people, places, buildings, topography, organisations, animals etc
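To make these facets concrete, here is a minimal sketch of how depicted-content tags might be grouped by facet for a single image. The facet names, example terms and the little matching function are my own illustrative assumptions rather than any standard scheme.

    # A hypothetical depicted-content record for one image, grouped by facet.
    # Facet names and terms are illustrative only; in practice they would come
    # from a controlled vocabulary.
    image_tags = {
        "image_id": "IMG-000123",
        "places": ["villages"],
        "built_works": ["cottages", "windows"],
        "topography": ["valleys"],
        "people": ["mothers"],
        "actions_and_events": ["laughing", "birthdays"],
        "animals_and_plants": ["dogs"],
        "proper_names": ["River Thames"],
        "depicted_text": ["'Post Office' sign"],
    }

    def matches(record, facet, term):
        """A simple findability check: does this image carry the searched term?"""
        return term in record.get(facet, [])

    print(matches(image_tags, "built_works", "windows"))  # True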
When dealing with depicted content I've found some of the biggest issues to be:
  • Identification - knowing what is in an image
  • Focus and specificity - knowing what to include and what to exclude
  • Consistency - applying the same term in the same way for the same depicted content
Identification - knowing what is in an image

Depicted content is a relatively black and white area - a dog is depicted, so a dog is tagged. It might sound a little strange, but working out what is actually in an image can be a lot harder than you think.

Take a look at the image "Do You Know What This Is?" by Sister72

This depicted content is fairly simple to see, but understanding what you're looking at is not that easy. Even if you know roughly what you're looking at, do you know what it's actually called?

One tip is to group similar images together when you're classifying them. Also, always start by assembling as much information as possible before you begin to classify images. It is especially important to gather the information you have from the creators or custodians of the images.

Also important, when you have the luxury, is to get the image creator to add key metadata about the image at the point of creation, or soon after.

Focus and specificity

Knowing what to include and what to exclude, what to mention and what to ignore, is also much harder than it sounds.

Firstly, some image users will want a piece of depicted content tagged whenever it appears in an image, others will only want it tagged when the image shows a very good representation of that content, and of course many people will want something in between the two extremes.

Different users have different requirements. You need to understand the domain in which you're working and see the classification of depicted image content as supporting the needs of your users.

For example, would you tag everything in this 'Messy Room' image?

What would you miss out and why?

Looking at the image "Mountain Goats" from Thorne Enterprises - would you tag this with goats as well as mountains? Would this be helpful?

Let's look at four images depicting windows:

'Window to the World', 'Portuguese Window', 'What Light Through Yonder Window Breaks?' and 'Window'.

Looking at these, it soon becomes clear that even deciding to apply a simple term like 'Windows' is not always easy.

Would you apply 'Windows' to the image of the cat looking out of the window? Is a window actually depicted in that image? If the image wasn't tagged with 'Windows' how else would anyone find an image of a cat looking out of a window?

The other three images show windows as parts of buildings, but is a building always depicted? Deciding when to apply a building type or the name of a building can be hard. Should you do this every time a part of a building is shown? Only when the whole building is shown? When enough of the building is visible? Or when a section of the building that to most people would represent the building is visible? For example, what part of the Empire State Building would you consider to depict that building? Rarely does anyone see it all - how much is enough? Would you treat the images of windows in a similar way and classify them all with a building type of 'Houses', or would you ignore the structure and focus on the parts - the window, the roof?

Consistency

Achieving consistent application of terms to images revolves partly around clear term definitions, well defined application rules and guidelines, and a robust quality assurance process.

Term definitions are very important. Defining the meaning of a term, and ensuring the people choosing which term to assign understand that meaning, can be crucial to term application. For example, creating a term such as 'Bow' without defining its meaning is not going to make it easy to apply.
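As a sketch of what a usable definition might look like in practice, here is one possible way of attaching scope notes to terms so indexers know which sense of 'Bow' is intended. The labels and wording are illustrative only, not drawn from any particular vocabulary.

    # Illustrative controlled-vocabulary entries: the ambiguous label 'Bow'
    # split into distinct concepts, each with a scope note guiding indexers.
    vocabulary = [
        {"term": "Bows (weapons)",
         "scope_note": "Use for archery bows depicted as objects or in use."},
        {"term": "Bows (knots)",
         "scope_note": "Use for decorative ribbon bows, e.g. on gifts or clothing."},
        {"term": "Bowing (gestures)",
         "scope_note": "Use for people bending at the waist in greeting or respect."},
    ]

    for entry in vocabulary:
        print(entry["term"] + ": " + entry["scope_note"])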

Application rules that are well considered, thorough and clear are also very useful. Even a simple concept often needs some form of guidance linked to it. I remember a while ago needing two terms, 'Indoors' and 'Outdoors', to allow users to find images of people who were outside and inside - a simple concept you might think, one that people often need, and one that's easy to apply - who'd need guidelines for that? However, it soon became clear that guidelines were needed after I received a series of interesting questions: Is being on a train indoors? Should studio shots always be considered indoors? Does every shot of a person have to have 'Indoors' or 'Outdoors' assigned to it? If not, when should the term be used and when not? Is this a focus issue? If so, how much of a location needs to be seen before 'Indoors' or 'Outdoors' is used? A clear set of application guidelines followed an interesting meeting!

Strong quality assurance processes are very valuable. People make mistakes and images generate interesting issues. Appointing staff to review a percentage of classification work based on clear guidelines, and then sharing findings with the people who assigned the terms to the images, is an important way of assessing how well the image classification is progressing and keeping a classification team synchronised.
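A minimal sketch of the sampling step in such a quality assurance process might look like the following; the 10% figure and the record layout are assumptions for illustration only.

    import random

    def select_qa_sample(tagged_image_ids, sample_percentage=10, seed=None):
        """Pick a percentage of recently classified images for a reviewer to
        check against the application guidelines. The 10% default is illustrative."""
        rng = random.Random(seed)
        sample_size = max(1, round(len(tagged_image_ids) * sample_percentage / 100))
        return rng.sample(tagged_image_ids, sample_size)

    # Example: review 10% of a batch of 200 newly classified images.
    batch = ["IMG-%06d" % n for n in range(200)]
    for image_id in select_qa_sample(batch, seed=42):
        pass  # route image_id to a reviewer, then record agreement or corrections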

Today I've talked a lot about content depicted in images; next time I'll focus on abstract concepts, which relate to an image's 'aboutness'.

This post first appeared in the Synaptica Central blog

Content Based Image Retrieval - Google and Similar Image Search

I was very interested to see Google experimenting with visual similarity in still images, what I usually call Content Based Image Retrieval or CBIR.

Google Labs recently launched an image search function based on visual similarity - Google Similar Images. This new offering allows searchers to start with an initial image and then find other images that look like their example picture.

I've been reviewing these types of systems on and off since the early '90s. They've always offered much, but I never saw any evidence that the delivery matched the hype.

I've always found that using pictures instead of text to find images works best on simple 2D images: carpet patterns, trademarks, simple shapes, colours and textures. Finding objects in images was always a struggle, and looking for abstract concepts - fear, excitement, gloom, isolation, solitude... - was never more than a vague possibility. Over the years a lot of work has been done in this area, and the search results I've seen have started to improve, but this technology is still young and, in my personal opinion, still rarely delivers what most users want, need and expect.
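To give a feel for what matching on 'colours and textures' can mean in its simplest form, here is a sketch of colour-histogram comparison, one of the oldest CBIR techniques. It assumes the Pillow imaging library is available, and it is emphatically not a description of how Google's system works.

    from PIL import Image

    def colour_histogram(path, size=(64, 64)):
        """A coarse colour signature: the normalised RGB histogram of a small
        thumbnail. Illustrative only - real CBIR systems use far richer features."""
        img = Image.open(path).convert("RGB").resize(size)
        hist = img.histogram()  # 256 bins per channel, 768 values in total
        total = float(sum(hist))
        return [count / total for count in hist]

    def histogram_similarity(hist_a, hist_b):
        """Histogram intersection: 1.0 means identical colour distributions."""
        return sum(min(a, b) for a, b in zip(hist_a, hist_b))

    # Example: rank candidate images by colour similarity to a query image.
    # query = colour_histogram("beach_query.jpg")
    # ranked = sorted(candidates,
    #                 key=lambda p: histogram_similarity(query, colour_histogram(p)),
    #                 reverse=True)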

Looking at Google Similar Images, I wonder how much of the back-end is pure content based image retrieval (CBIR), how much is using metadata in some way, and how the two are interacting. One thing that often appears to help produce a tight first page of results is simply pulling the same image from different sites. I also noticed that the 'similar images' option is not available for all images - which makes me wonder why. Have some images been processed in ways that others haven't?

Diving right into the experience, I entered a query for a place in the UK and didn't see any image results with the 'Similar Images' option. I wonder whether this is to do with the presence of the results on UK websites?

I persevered, and found some interesting images and got some interesting results.

I started with a fairly standard image of a beach scene, always a favourite with testers. As you can see, I got a pretty good first screen back. However, the 5th and 6th images on the top row show no sea or beach, and neither do the first three images on the second row.

I moved on to an image of what looks like equipment at the top of a pole.

The results were much more mixed: studio shots of objects, fighting people, trucks etc. No images were returned that I would consider similar to the example picture.

Interesting results came from a similarity query on a clock face. A couple of the first results hit the mark, then the results set degenerated into image similarity based more on the colour and the black background than anything else.

My last attempt, before morning coffee called, was an image of a country road. I was hoping that the clear roadway might produce a pretty precise results set. However, I was a little disappointed by what I saw.

The first results page only produced one vague road on the bottom row, with most of the similarity seemingly related to colours instead of objects.

From my less than scientific dip into this Google Labs offering, it looks like the highlighted images on the Google Similar Images home page produce good results - better results than I've seen other systems come up with. Many other image queries are sure to also produce results which may well impress. However, many of the results I saw did not match the initial level of accuracy I saw from the highlighted home page pictures.

I don't want to be picky - this is still a prototype after all - and well done to Google for introducing a wider audience to this type of image search. Hopefully, after more work, the results will increasingly make more sense to people, the access points offered to depicted content and conceptual aboutness will improve, and more images will be more findable for more people.

Until that time, visual search without text will help with image findability, but text, metadata, and controlled vocabulary applied to images by people is for me still king, and will continue to offer the widest and deepest access to images for a long time to come.

Ian

This post first appeared on the Synaptica Central Blog

VideoSurf - a new way to search for video?

If you have been keeping up with my posts on this blog you won't be surprised to learn that today I spent my lunch hour exploring a video search offering that's new to me called VideoSurf. I was so interested in this new search tool that I interrupted my usual run of image indexing articles, and my lunch hour, to do some research and write up this post.

In a September press release VideoSurf claimed its computers can now, "see inside videos to understand and analyze the content." I would encourage anyone who has an interest in this area to take a look at the company's website, give it a whirl and see what they think.

In my experience, video search engines have relied on a combination of the metadata linked to the video clips, scene and key frame analysis, and automatic indexing of soundtracks synched with the video.

For example, a soundtrack synchronised to video content can be transcribed to text and indexed; that text can then be linked to sections of the video by looking for breaks that identify scenes, with various techniques also used to create key frames that attempt to represent each scene. These techniques are backed up by the metadata accompanying a video clip.
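As a rough sketch of that linking step, the following aligns time-coded transcript segments with scene start times so each scene can be found by its spoken text. The data structures are invented for illustration, and no particular speech-to-text or scene-detection tool is assumed.

    # Hypothetical inputs: scene start times (in seconds) from shot detection,
    # and time-coded transcript segments from speech-to-text.
    scene_starts = [0.0, 42.5, 97.0, 180.0]
    transcript = [
        (3.2, "welcome to the programme"),
        (45.0, "let's look at the results"),
        (120.4, "thanks for watching"),
    ]

    def text_per_scene(starts, segments):
        """Attach each transcript segment to the scene whose start time precedes
        it, so a text query can return a scene rather than a whole video."""
        scenes = {start: [] for start in starts}
        for timestamp, text in segments:
            scene_start = max(s for s in starts if s <= timestamp)
            scenes[scene_start].append(text)
        return scenes

    for start, texts in text_per_scene(scene_starts, transcript).items():
        print(start, texts)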

If you have worked in the industry you know that video metadata is expensive to create. Most of what people see online is either harvested for free from other sources, or limited in size and scope. Such metadata may cover the title of a video clip, text describing the clip, clip length, etc. It may even include some information about the depicted content in the video, or abstract concepts which try to specify what a clip is about. Though this level of video metadata is the most time-consuming and complex to create, it also offers the fullest level of access for users.

Audio tracks can also be of great use, and many information needs can be met by searching on the audio in a video. There are, however, limitations; for example, many VERY SCARY scenes have little dialogue in them and depend heavily on camera-work and music to give the feeling of fear. How easy is it to find these scenes based on dialogue alone, or even by 'seeing inside a video'? How can you look for 'fear' as a concept?

Content based image retrieval, looking at textures, basic shapes and colours in still images, has yet to offer the promised revolution in image indexing and retrieval. In some contexts it works quite well; in many contexts end-users don't really see how it works at all. So adding a layer to video search that tries to analyse the actual content, pixel by pixel, is an interesting development.

To my mind, a full set of access paths to all the layers of a video still demands the use of fairly extensive metadata, especially for depicted content and abstract concepts. Up to now, metadata has always been the way to find what an image, whether it's still or moving, is conceptually about, and what can be seen in individual images and videos. Even when that metadata is actually sounds, turned into text and stored in a database.

Is VideoSurf's offering really any different from what's gone before?

Is this system, which seems to be using Content-Based Image Retrieval (CBIR) technology to some extent, a significant advance?

Reviewing some of the blog posts people have published it seems many others are interested in VideoSurf's offering as well.

For an initial idea as to how VideoSurf works, try taking a look at James McQuivey's OmniVideo blog post, "Video search, are we there yet?". As James describes in the article, one pretty neat aspect of what VideoSurf can do is match faces, enabling you to look for the same face in different videos and reducing the reliance on having the depicted person mentioned in the metadata. However, this clearly isn't much help if the person you're looking for is mentioned but not depicted, in which case indexed audio would help, or if the person is not well depicted, for example only shown from the side or the back. Quibbles aside, if this works, then it is a pretty useful function in itself.

Here are some of the other bloggers who have been writing their thoughts on VideoSurf:

* An interesting post on this subject from the Rhondda's Reflections blog on Searching for videos with VideoSurf
* Phil Bradley comments on his Weblog on the VideoSurf Video Search
* And one of the best current reviews of VideoSurf that I've found comes from Chris Sherman at SearchEngineLand.

Clearly, we're on the right track and there is a lot of interest in the opportunities and technologies around video search. However, I think there is a long way to go before detailed and automatic object recognition is of any meaningful use to people. As far as I can see, it's still not there with still or moving digital images. Metadata for me is still the 'king' of visual search. There are, however, a growing number of needs that automatic solutions can already resolve, and a growing case for solutions that offer a combination of automatic computer recognition of image elements, metadata schemes, and controlled vocabulary search and browse support.

I'd love to know what people think about VideoSurf and other services that provide video search.

Ian

This post first appeared at the Synaptica Central blog

Classifying Images Part 2: Basic Attributes

I've already asked the question "What is the Hardest Content to Classify?" and promised additional posts on the subject, based on my background of 13 years developing taxonomy and indexing solutions for still image libraries. I am continuing my thoughts in this post, focusing on the basic attributes of image classification.

In my opinion, images are the hardest content items to classify, but luckily, for sanity's sake, not all image classification is equally demanding.

The easiest elements of image classification relate to what I'm going to call image attributes metadata. This area, for me, covers all the metadata about the image files themselves, rather than information describing what is depicted in images and what images are about.

Metadata aspects in this area cover many things and there are also layers to consider:

1. The original object
-- This could be a statue, an oil painting, a glass plate negative, a digital original, or a photographic print

2. The second generation images
-- The archive image taken of the original object, plus any further images: cut-down image files, screen sizes, thumbnails, images in different formats (JPEG, TIFF), etc.

The first thing to think about is the need to create a full and useful metadata scheme, capturing everything you need to know to support what you need to do. This may be to support archiving and/or search and retrieval.

Then look at what data you may already have or can obtain. Analyse data for accuracy and completeness and use whatever you can. Look to the new generation of digital cameras to obtain metadata from them. Ask image creators to create basic attribute data at the time of creation.
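For digital originals, a sketch of pulling basic attribute data straight from a camera's embedded EXIF might look like this. It assumes the Pillow imaging library; the field names in the returned record are my own choices for illustration.

    from PIL import Image, ExifTags

    def basic_camera_metadata(path):
        """Read whatever EXIF the camera wrote - capture date, device, dimensions -
        as a cheap first pass at basic attribute metadata."""
        img = Image.open(path)
        exif = img.getexif()
        named = {ExifTags.TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
        return {
            "width": img.width,
            "height": img.height,
            "file_format": img.format,
            "capture_date": named.get("DateTime"),
            "camera_model": named.get("Model"),
        }

    # print(basic_camera_metadata("archive_scan_0001.jpg"))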

You'll be interested in the following metadata types:

- Scanner types
- Image processing activities
- Creator names
- Creator dates
- Last modified names
- Last modified dates
- Image sizes and formats
- Creator roles - photographers, artists, sculptors
- Locations of original objects
- Locations at which second generation images were created
- Unique image id numbers and batch numbers
- Secondary image codes that may come from various legacy systems
- Techniques used in the images - grain, blur etc
- Whether the images are part of a series and where they fit in that series
- The type of image - photographic print, glass plate negative, colour images, black and white images

This data really gives you a lot of background on the original and on the various second generation images created during production. Much of this data can be obtained freely or cheaply; lots of it will be quick and easy to grab and enter into your systems. It should also be objective and easy to check.
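As a sketch of how these layers might hang together, the following record structure separates the original object from its second generation image files. All field names are assumptions made for illustration rather than any established standard.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class DerivativeImage:
        """A second generation file made from the original object: archive scan,
        thumbnail, screen-size JPEG and so on. Field names are illustrative."""
        image_id: str
        file_format: str                      # e.g. "JPEG", "TIFF"
        pixel_size: str                       # e.g. "1024x768"
        created_by: Optional[str] = None
        created_date: Optional[str] = None
        legacy_codes: List[str] = field(default_factory=list)

    @dataclass
    class OriginalObject:
        """The original item: statue, oil painting, glass plate negative,
        digital original or photographic print."""
        object_id: str
        object_type: str                      # e.g. "glass plate negative"
        creator_name: Optional[str] = None
        creator_role: Optional[str] = None    # photographer, artist, sculptor
        location: Optional[str] = None
        derivatives: List[DerivativeImage] = field(default_factory=list)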

My next post will cover dealing with depicted content in images. Please feel free to leave comments or questions on the subject.

This post first appeared on the Synaptica Central blog

What is the Hardest Content to Classify?

A topic that came to mind, as I thought about things to blog about, is the whole area of classification of different types of content: text, sound, video and images.

I often speak to clients who have a range of item types stored in a number of repositories. They're often looking to classify new content, or to work on older content in order to improve its findability. They are always looking to get more value from their content.

In these circumstances a content audit is often called for, to answer the 'What do you have?' question. This then leads to a general discussion of the content types and the ways in which they can be classified, usually using a controlled vocabulary either applied by a machine, by a person, or by a mixture of the two.

One thing that often makes people ask me questions is my fairly frequent assertion that images are easily the hardest item types to deal with.

Why are Images the Hardest Content to Classify?

-Textual items contain text. The use of auto-categorising software, free text storage and access, etc. makes organising and finding textual items relatively easy.

-Sound can be digitised and turned into text.

-Video often has an audio track that can be turned into text too. Computers can be used to identify scenes. Breaking a video into scenes and linking a synched and indexed soundtrack to them can provide pretty good access for many people (though there's a whole blog post on the many access points to video that this process doesn't provide).

Images, on the other hand, have no text and no scenes; all you have are individual images, with the meaning and access points held in the visuals.

Some will say that this is really not a problem, all you need to do is use content based image retrieval software to identify colours, textures and shapes in your images, and you'll soon be searching for images without any manual indexing. However, whilst this technology is promising, it leaves a lot to be desired.

Today, the way to provide a wide and deep level of access to still images continues to be by using people to view images, write captions and assign keywords or tags to each image based on image 'depictions' and 'aboutness and attributes'. This manual process often requires the use of a controlled vocabulary to improve consistency and application.

However, how this indexing is done, and what structures support it, will be the subject of further posts - I just wanted to get my thoughts out there!

So stay tuned.

Ian

This blog post first appeared on the Synaptica Central blog

Author Spotlight: Ian Davis

My name is Ian Davis, and I'm a Global Project Delivery Manager working in the Dow Jones Client Solutions Taxonomy Delivery Team and based in our London office. I work to develop and deliver a range of content and information solutions for our global clients. Projects can include discovery assessments, taxonomy strategy and creation, taxonomy mapping, search support, information architecture and website development. I also assist in the marketing and deployment of the website www.taxonomywarehouse.com

My particular areas of interest include: developing taxonomies, thesauri, and metadata schemas, manual and automated indexing of still and moving images, deploying and using Synaptica controlled vocabulary software, the challenges of managing teams of geographically dispersed information workers, website creation and development, and the localisation of content into multi-lingual environments.

I joined Dow Jones in February 2006, after 13 years developing taxonomy and indexing solutions for still image libraries at both Corbis Corporation and Photonica (formerly part of Amana Japan and now part of Getty Images).

At Corbis, I served as head of the UK division’s image cataloguing department.

At Photonica, I worked to create and implement the e-commerce website www.iconica.com and was responsible for the development of www.photonica.com. I also developed, implemented and maintained all vocabularies underpinning the classification and retrieval of Photonica's extensive digital image content. One aspect of this included creating an extensive English language thesaurus and managing the localisation of that controlled vocabulary into five European languages. I managed a team of ten still image indexers and five thesaurus developers.

After leaving Photonica, I worked as an independent consultant for BUPA in the area of metadata and taxonomy creation and development, and the implementation of an enterprise search solution.

Most of my time is currently spent working on the delivery of a range of client engagements outside the Americas. I manage a team of geographically dispersed staff who are working on the customisation of large topical thesauri and the creation of various browsable taxonomies. We also create multi-lingual thesauri.

This post first featured on the Synaptica Central blog.