Monday 30 November 2009

Digital Asset Management Foundation - Coffee Meet-Up - Notes and Audio

In my last blog post I mentioned I was taking part in an informal 'meet-up' to discuss Digital Asset Management (DAM). I made some rough notes during the call, which I hope will serve to give a flavour of the discussions:

Topics:
  • The need to broaden the understanding of DAM.
  • The need to share experiences and challenges in DAM.
  • The need to connect with clients, understand needs and deliver targeted solutions.
  • Creating metadata and vocabularies to support assets: images and video.
  • Applying metadata to image and video assets - manual, automatic and semi-automatic solutions.
  • DAM solutions: 'software as a service' versus 'enterprise solutions'.
  • Creating Vision Statements for DAM.
  • The phases of DAM.
  • DAM return on investment: key task analysis, baselining and measuring outcomes.
  • Controlled vocabularies for DAM - licensing an existing vocabulary to kick-start development, then developing and customising it.
  • Using consultancy to support DAM creation and utilisation.
  • Working with legacy data in DAM systems.
  • Harvesting metadata from creators and suppliers.
  • Adding value through manual tagging of assets.
  • Tagging assets using external sources (off-shore or local) or in-house resources.
  • Video processing: soundtrack indexing, scene and key recognition.
For those who want to listen to the conversation, you can do so by visiting the following URL:

DAM Foundation - Audio Track of Coffee Meetup 27 Nov 2009

The audio is a little broken up at the start, but stick with it - it gets better. Also, the time delay between the US and UK means it sounds as if the speakers are talking over each other.

Speakers were:
  • Nigel Cliffe, Managing Director at Cliffe Associates Ltd
  • Ian Davis, Taxonomy Delivery Manager, Outside Americas, Dow Jones Client Solutions
  • Henrik de Gyor, Digital Asset Manager at K12 Inc
I hope you all enjoy the conversation; we hope to arrange more in a few weeks.

Ian

Friday 27 November 2009

Digital Asset Management and Metadata for Images and Video

Missing out on the recent Photo Metadata Conference - http://bit.ly/6PlLJj - has reminded me how much I love working in the DAM world, in particular in the area of creating metadata and controlled vocabularies to support digital image and video search and browse.

Reading about the Photo Metadata Conference programme, it seems there were some great presentations. I downloaded them all (they're available from the conference website) and had great fun going through all the excellent experiences, comments and ideas.

I wish I'd been there for Madi Solomon's keynote on the collapse of boundaries in the digital world. I agree that it's less and less about what format an asset is in and more about what that asset is, and how it needs to be organised to support its use.

Assets need to work for their places in the world. Finding them and using them needs to be simpler, and metadata and controlled vocabularies need to support and enable this.

Understanding the assets an organisation has, analysing the needs of that organisation, and ensuring it has what it needs and that each asset is organised to support its use, is where the really exciting and satisfying work is for me.

Having worked for Corbis from 1991 to 1999, in the early research and development days of digital image organisation and sale, I was excited to see Max Wieberneit's presentation on still and video metadata.

Video and still images have much in common. I've blogged about this in the past and it's still a big area for me. Both asset types have technical metadata, depicted content metadata and aboutness metadata, to name but a few. Add to this the soundtracks for video - which can be indexed for retrieval - and the ability to segment video into scenes and key frames, and you have an exciting mix of metadata across both formats.
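To make those layers concrete, here's a minimal sketch in Python - the class and field names are my own invention, not any standard schema - of how the shared metadata might be modelled, with video adding its time-based layers on top:

```python
from dataclasses import dataclass, field

@dataclass
class AssetMetadata:
    """Layers shared by still images and video (illustrative only)."""
    technical: dict   # e.g. format, dimensions, codec
    depicted: list    # what is literally shown: people, places, objects
    aboutness: list   # concepts the asset conveys: 'luxury', 'freedom'

@dataclass
class VideoMetadata(AssetMetadata):
    """Video adds time-based layers on top of the shared ones."""
    transcript: str = ""                          # indexed soundtrack text
    scenes: list = field(default_factory=list)    # (start_sec, end_sec, key_frame)

clip = VideoMetadata(
    technical={"format": "mp4", "duration_sec": 120},
    depicted=["beach", "family"],
    aboutness=["relaxation"],
    transcript="waves and laughter",
    scenes=[(0.0, 12.5, "frame_0001.jpg")],
)
```

The point of the shared base class is simply that a search system can treat depicted and aboutness tags identically for both formats, while the video-only fields open up scene- and transcript-level retrieval.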

I agree with Max that using established metadata systems makes a huge amount of sense, as does working to get as much metadata as possible from the creators or custodians of images and video - it's much easier to capture metadata early on in the creation process than down the line, and some metadata will be lost if you leave its capture too late.

As Max says, one key concern for image and video asset metadata is the users of the assets. Different people have different needs and require different metadata. For many people, a good level of access to video can be built using the initial metadata associated with the videos, key scene and frame analysis, and the indexing of the videos' audio tracks. For others, though, access to the mood of a video may only come through music analysis, the absence of noise at key moments, and manually applied subject tags.

On the image side, as Max says, editorial users have somewhat different needs from commercial users of stock photos. Max showed a great slide listing a long set of conceptual keywords: 'comfortable, dreaming, luxury, spoiled' etc. I remember the fun we had creating these concepts, arranging them in hierarchies, providing synonyms for them, and creating definitions and application rules to control how they're assigned. It sounds easy, but trying to apply a concept like "spoiled" or "luxury" accurately often brings many challenges.
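As a rough sketch of the kind of vocabulary entry described above - the structure and terms are illustrative, not any real published vocabulary - a concept with its synonyms, hierarchy, definition and application rule might look like this, along with a simple query expansion over it:

```python
# A hypothetical controlled-vocabulary entry, sketching the elements
# described above: preferred term, synonyms, broader/narrower hierarchy,
# a definition, and a human-readable application rule for indexers.
vocabulary = {
    "luxury": {
        "synonyms": ["luxurious", "opulence", "opulent"],
        "broader": "lifestyle concepts",
        "narrower": [],
        "definition": "Conveys great comfort, expense or indulgence.",
        "application_rule": "Apply only when the scene clearly signals "
                            "expense or indulgence, not mere comfort.",
    }
}

def expand_query(term: str, vocab: dict) -> set:
    """Map a searcher's term to the preferred term plus all its synonyms."""
    results = set()
    for preferred, entry in vocab.items():
        if term == preferred or term in entry["synonyms"]:
            results.add(preferred)
            results.update(entry["synonyms"])
    return results
```

A search for "opulent" then matches assets tagged with the preferred term "luxury" - one of the main payoffs of the synonym work.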

I've already touched on the needs of video users, and some of the basic ways video can be organised. It was great to read Lionel Faucher's piece on how a video agency uses metadata. Video is in some ways easier to work with than still images - automated solutions are more applicable and much more successful - but challenges still abound, as Lionel clearly shows in his presentation.

One of the interesting topics I've been following for a while is the metadata being generated from digital cameras, and the work being done to make more use of it. Related to this is the exciting area of geographic coordinate metadata, which is created by some digital cameras when a photo is taken, and the uses to which that can be put.

Two presentations in the area of geography and image metadata were given by Bernd Beuermann and Ross Purves. A great research area was mentioned by Bernd - taking GPS co-ordinates and linking them to points of interest within a certain range of a GPS location. This can make tagging images with key depicted buildings or topography a little easier, and will produce many advantages for image tagging and retrieval.
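As a rough illustration of the technique Bernd described - with a made-up two-entry gazetteer and an illustrative 500 m range - linking a photo's GPS position to nearby points of interest can be sketched like this:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))  # mean Earth radius ~6371 km

def nearby_pois(photo_lat, photo_lon, pois, max_km=0.5):
    """Return points of interest within max_km of the photo's GPS position."""
    return [name for name, (lat, lon) in pois.items()
            if haversine_km(photo_lat, photo_lon, lat, lon) <= max_km]

# Hypothetical gazetteer of points of interest
pois = {"Tower Bridge": (51.5055, -0.0754), "St Paul's": (51.5138, -0.0984)}
print(nearby_pois(51.5050, -0.0760, pois))  # → ['Tower Bridge']
```

A real system would query a gazetteer service rather than a hard-coded dictionary, but the principle is the same: candidate tags for depicted buildings or landscape features fall out of the camera's co-ordinates for free.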

A couple of things I'm interested in were missing from the conference. I'd have liked to see more on working with video soundtracks, automatic scene and frame analysis, and the place of manually applied tags in video indexing. I'd also have liked to see more about the creation of hybrid image retrieval systems that bring together content-based image retrieval with controlled vocabulary and folksonomy tags. Maybe that's all for next year!
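For what it's worth, here's one very simplified way such a hybrid system might blend its signals - a weighted sum of a content-based similarity score and tag overlap, with the weights and function names entirely of my own invention:

```python
def hybrid_score(cbir_similarity: float, query_tags: set,
                 controlled_tags: set, folksonomy_tags: set,
                 w_cbir=0.5, w_cv=0.35, w_folk=0.15) -> float:
    """Blend a content-based similarity score (0-1) with tag overlap.

    Controlled-vocabulary matches are weighted more heavily than
    folksonomy matches; all the weights here are illustrative only.
    """
    def overlap(tags):
        # Fraction of the query's tags found on the asset
        return len(query_tags & tags) / len(query_tags) if query_tags else 0.0
    return (w_cbir * cbir_similarity
            + w_cv * overlap(controlled_tags)
            + w_folk * overlap(folksonomy_tags))

# An asset that looks similar (0.8) and half-matches each tag source
score = hybrid_score(0.8, {"beach", "sunset"},
                     controlled_tags={"beach", "coast"},
                     folksonomy_tags={"sunset", "holiday"})
```

The appeal of the hybrid approach is exactly this kind of graceful degradation: an untagged asset can still surface on visual similarity, and a visually ambiguous one on its tags.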

There also seemed to have been a big emphasis on technology, file formats, and metadata standards - in many ways the building blocks or key tools for organising and providing access to video and image content. What I'd have liked to see more of is the uses to which these building blocks have been put, the real world sharing of user needs and the challenges of actually making the technology and the supporting structures work to achieve business aims.

I should end by thanking the organisers of the event, and the presenters, for putting so many presentations online - it's very helpful and refreshing to have such a good level of access to this form of content.

One way in which I keep involved in the image and video world is through my involvement in the DAM Foundation on LinkedIn. There is a coffee meet-up organised for this afternoon, which I hope will kick-start a lot of exciting developments. I'll post more about the outcome of the meeting next week.

Ian