Another OPML podcast!

It’s great to hear another podcast with loads of chat around OPML over on Alex Barnett’s blog. Present on the podcast were Alex, Adam Green, Joshua Porter and John Tropea. Maybe I’ll be organised enough to join in on the next one. 😉
Also, check out OPMLCamp, which sounds like it should be really interesting. Hopefully I’ll be in the area around that date.

This past week has been incredibly busy; I’ve been cranking out code like a you-know-what. OPML, RSS, feeds, trees, Flash, scripts, scripts, scripts!!

Hey, if the boot fits, strap it! 😉

Also, I noticed that Alex did a great job of annotating the podcast in the description. Now, this is perfect data to get organised in OPML with a ‘time’ attribute. I’m thinking that an OPML outline node with a ‘time’ attribute could point to an RSS-formatted file/list containing the details, links and timestamps. Possibly in OPML itself? Either create something new in OPML – or use RSS in a different way, but utilising all the available elements?
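
A rough sketch of the shape I have in mind (every attribute name here is just my guess, not any agreed spec):

<opml version="1.0">
  <head>
    <title>Podcast annotations (sketch)</title>
  </head>
  <body>
    <!-- 'annotations' points at a separate RSS/OPML file holding the detail;
         'time' marks an offset into the enclosure's audio -->
    <outline text="Alex Barnett podcast" url="http://example.com/podcast2.mp3"
             annotations="http://example.com/podcast2-notes.xml">
      <outline text="Intro and hellos" time="00:00:30"/>
      <outline text="Adam Green on OPML search" time="00:07:45"/>
    </outline>
  </body>
</opml>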

Any ideas anyone?

Basically, we need to think about simple ways we can organise and link files containing time-based annotation of the content in an enclosure. That data is so important. Audio search engines like Podzinger and Podscope could then output that data and enrich the metadata which points to and describes a podcast.
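
For the annotation file itself, one option is plain RSS 2.0 with a small extension namespace. A minimal sketch (the ‘anno’ namespace and its elements are invented here purely for illustration):

<rss version="2.0" xmlns:anno="http://example.org/ns/annotation#">
  <channel>
    <title>Show notes: Alex Barnett podcast #2</title>
    <link>http://example.com/podcast2</link>
    <description>Time-based annotations for the enclosure</description>
    <item>
      <title>Adam Green on OPML search</title>
      <link>http://example.com/opml-search</link>
      <!-- hypothetical element: offset into the audio where this topic starts -->
      <anno:start>00:07:45</anno:start>
    </item>
  </channel>
</rss>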

Then we need the tools and interfaces for all this. Mmmm… yesss… 😉

4 comments

  • Hey Kosso, sorry you couldn’t make this one. Hope to have you on soon…

    Alex.

  • Ok, I’ve just been trying to encourage RDF people to expose RSS 2.0 and OPML, so I hope you’ll forgive me for flipping in the other direction here.

    Ok, “simple ways we can organise and link files containing time-based annotation of the content in an enclosure”. First of all, I’d reword that to talk of resources rather than enclosures: the target has a URI which identifies a resource on the web, with a representation available in an audio format. Now an annotation is a kind of description. So what existing ways are there for describing resources? How about the Resource Description Framework..?

    Sorry for the cheap wordplay, but my point is that you are looking at stuff that has already had a lot of work devoted to it elsewhere. “Simple” is not inventing from scratch. Have a Google around terms like RDF, annotations, audio, MPEG-7. Ok, MPEG-7 is a big one, but there are lighter schemas, like the stuff used by Ontolog, and the BBC have some stuff on the way too. You might also want to check Swoogle with a few appropriate keywords.

    The advantage of using RDF is that you can mix together the information using different vocabularies in a logically consistent fashion, which means it makes sense to talk about people (FOAF) and their recordings in a way that can be machine-processed, without having to rely on text search or XPath guesswork.

    But you want OPML and/or RSS? Ok, if you insist. Create the extensions in a way which makes sense in the domain of interest, audio annotations. If these are directly mappable to an RDF expression of the same information, you stand a far better chance of sane interop. Tools like aggregators can still read the data, but you’re not just throwing in ill-defined, unreusable markup.

    The way I’d start is to write down what I want to say. Then look for existing vocabularies (see above) which cover what’s needed. If you’re talking RSS, then RSS 1.0 may help. Then I’d create some examples as RDF (you don’t need to use RDF/XML to start; Turtle is a good handwriting syntax, see Ian Davis’ recent work with his Bio vocab). After putting it into RDF/XML (because the target formats are XML) and running that through the validator, I’d then look at how to fit it into RSS/OPML. This may sound longwinded, but it has the benefit of built-in sanity checking, and also means that the data you finally emit will potentially be compatible with any Semantic Web tools.
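
    To make that concrete, here’s a rough Turtle sketch of the kind of data involved (dc: and foaf: are the real Dublin Core and FOAF vocabularies; anno: is invented purely for illustration):

    @prefix dc:   <http://purl.org/dc/elements/1.1/> .
    @prefix foaf: <http://xmlns.com/foaf/0.1/> .
    @prefix anno: <http://example.org/ns/audio-annotation#> .  # invented for this sketch

    <http://example.com/podcast2.mp3>
        dc:title   "Alex Barnett podcast #2" ;
        dc:creator [ a foaf:Person ; foaf:name "Alex Barnett" ] ;
        anno:segment [
            anno:start "00:07:45" ;
            dc:title   "Adam Green on OPML search" ;
            dc:relation <http://example.com/opml-search>
        ] .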

    One final point – I’d strongly suggest giving any resulting format a profile attribute somewhere, e.g.
    <outline x:profile="http://kosso.org/podcasts">

    The advantage of putting a URI in like that is that the data can unambiguously be interpreted against your definitions (which would probably appear at the URI on the web). This goes a little further than using namespaced elements alone, as you can’t be sure what kind of mixes people are likely to do. With a profile you can make it more controlled. The same kind of trick is used around microformats – the profile URI(s) can be used to determine which XSLT you should apply to convert the data into RDF, thus mapping it unambiguously to a formal definition (see also GRDDL).
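
    GRDDL’s mechanism for general XML does this by naming the transformation right in the document. Roughly (the stylesheet URL here is hypothetical):

    <!-- grddl:transformation names an XSLT that maps this document to RDF/XML -->
    <opml version="1.0"
          xmlns:grddl="http://www.w3.org/2003/g/data-view#"
          grddl:transformation="http://kosso.org/podcast-to-rdf.xsl">
      <!-- ... head and body as usual ... -->
    </opml>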

    “Then we need the tools and interfaces for all this” – if you map to RDF there are a lot of tools available off the shelf. For interfaces – SPARQL will get you a long way.
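
    For instance, a SPARQL query over the Turtle sketch above, listing every annotated segment of a recording in time order (anno: again being the invented vocabulary):

    PREFIX dc:   <http://purl.org/dc/elements/1.1/>
    PREFIX anno: <http://example.org/ns/audio-annotation#>

    # title and start time of each annotated segment
    SELECT ?title ?start
    WHERE {
      <http://example.com/podcast2.mp3> anno:segment ?seg .
      ?seg dc:title ?title ;
           anno:start ?start .
    }
    ORDER BY ?start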

    Go Kosso go!

  • Hi Danny,

    That’s great stuff! Thanks!

    fyi: I just left the BBC after nearly 4 years developing a whole raft of XML-driven multimedia apps and systems 😉 – ever see the big screens in the railway stations in the UK with BBC News on them? That’s my baby 😉
    Before that, I developed a PHP/MySQL-driven SMIL 1.0 publishing system. It’s a shame how SMIL went, and the lack of proper support (SMIL 2.0 vs HTML+TIME etc.), but if there’s any way we can get that ball rolling again to help annotate and ‘orchestrate’ media in browsers (or other clients – Flash etc.) then that would be great.
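
    For anyone who never saw SMIL, a minimal sketch of the kind of orchestration I mean (URLs hypothetical; a real document would also declare a layout region for the text):

    <smil>
      <body>
        <!-- play the audio and pop up a timed caption in parallel -->
        <par>
          <audio src="http://example.com/podcast2.mp3"/>
          <text src="http://example.com/caption1.txt" begin="30s" dur="10s"/>
        </par>
      </body>
    </smil>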

    Once we have podcast.com operational, I’ll be looking very closely at all this time-based stuff, as it’s kinda my thing 😉

    Whatever format it is – and whatever events it may trigger – the chosen format would be great for podcast show notes as well as a multimedia ‘driver’.

    😉

    thx for stopping by!
    K

  • Here’s a similar service…

    http://www.audibletype.com is a web-based voice recognition system. The system allows for transcription of audio and video files to search-engine-friendly text! It will transcribe your audio or video file for you and provide time indexing so that you can quickly navigate through the relevant data. Audibletype is also working on an open API so that others can develop and build “widgets”.
