Twine is alive and well

September 16, 2008

I have been using Twine for a while now, and I dig the fact that it lets me so easily organize the content from the Web that I am interested in. Twine users describe it as an Interest Network on top of a Social Network. It helps tackle the information overload problem, with capabilities beyond online bookmarking sites like Delicious: it actually extracts the content from webpages, auto-generates tags in different categories using NLP (Natural Language Processing) techniques, and gives you the ability to connect to other Twiners like you do on Facebook. I like the concept and have been using it regularly.
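To make the tagging idea concrete, here is a toy sketch of frequency-based tag extraction. This is emphatically not Twine's actual NLP pipeline (which is proprietary); the stopword list and scoring are my own stand-ins for illustration.

```python
# Toy tag extraction: pick the most frequent non-stopword terms as tags.
# A stand-in for real NLP tagging, NOT what Twine actually does.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "it",
             "that", "this", "for", "on", "with", "as", "are", "was"}

def extract_tags(text, top_n=3):
    """Return the top_n most frequent non-stopword terms as candidate tags."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return [word for word, _ in counts.most_common(top_n)]

page = ("Semantic Web technology lets machines understand web data. "
        "The Semantic Web builds on RDF and ontologies so that web "
        "applications can share data.")
print(extract_tags(page))  # → ['web', 'semantic', 'data']
```

A real system would use part-of-speech tagging and named-entity recognition rather than raw word counts, but the input/output shape is the same: page text in, candidate tags out.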

The entire site is built on top of Semantic Web infrastructure, and they are one of the pioneers in this realm of emerging web technologies – find me at


So the day of tutorials is over. There were many choices, many activities, and many participants trying to decide which talk to hit before things started at 8:30 this morning. The tutorials in the first half of the day ran in parallel and were equally competitive in nature. From the plethora of choices, I decided to hit the first tutorial, The Future of Social Networks: The Need for Semantics, by the DERI team. However, I quickly realized that there were other interesting things going on in parallel, so I moved on to the W3C tutorial, Using Semantic Web Data: Query, Inference, and Proof, by Eric Prud’hommeaux of W3C. Well, guess what? Even that was not enough. I was craving something more, so I decided to land on Semantically-Enabled Service Oriented Architectures by the Zepheira team, Brian Sletten and Uche Ogbuji. The first half of the day was over, and we were ready for lunch by noon.

For the second half, which started at 1:15 PM, I sorta juggled between two sessions again to whet my appetite. First I went to Dynamic and Agile SOA using SAWSDL by Amit Sheth and the team from Knoesis; halfway through, I sneaked into Semantic Resource Oriented Architectures (ROA): The Next Generation of Enterprise Services by David Wood and Brian Sletten from Zepheira. Both talks were equally intriguing and informative on what has been established in the Semantic Web space for the enterprise.

SemTech 2008 fired off today at the Fairmont San Jose with An Introduction to Semantic Technology and The State of the Semantic Web. Both speakers were eloquent and thoroughly au fait with the Semantic Technology subject matter: Dave McComb for the first topic and Ivan Herman for the latter. Dave ignited the conference with the latest on Semantic Technology and greeted the audience with the notion of the boiled frog as an analogy for how we are being boiled in data – the massive amount of data we are drowning in. He jumped right into the topic and introduced the T-Box and A-Box notions pertaining to the Semantic Web. He mentioned that these can be related approximately to classes and instances, but not quite: the T-Box is the terminological box and the A-Box the assertional box.
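In description-logic terms, the T-Box holds terminological axioms (how classes relate) while the A-Box holds assertions about individuals. A minimal sketch of the split, with made-up class names, might look like:

```python
# T-Box: terminological axioms (subclass relationships between classes).
tbox = {
    "Dog": "Mammal",      # every Dog is a Mammal
    "Mammal": "Animal",   # every Mammal is an Animal
}

# A-Box: assertions about individuals (instances of classes).
abox = {"rex": "Dog"}

def classes_of(individual):
    """Infer every class of an individual by walking up the T-Box hierarchy."""
    cls = abox[individual]
    result = [cls]
    while cls in tbox:
        cls = tbox[cls]
        result.append(cls)
    return result

print(classes_of("rex"))  # → ['Dog', 'Mammal', 'Animal']
```

The "classes vs. instances, but not quite" caveat shows up even here: the T-Box can entail facts about rex (that he is an Animal) that were never asserted directly in the A-Box.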

Then he went on to explain something I had been longing to hear explained for a while: the components involved in creating an entire Semantic Web solution. He showed the audience a simple diagram with Data Extraction (DE) and Entity Extraction (EE) as the sources of Semantic Web data. Data Extraction maps other data sources to RDF or OWL, whereas Entity Extraction creates structured data from other data sources using NLP or similar tools. The resulting RDF or OWL can then be stored in a Triple Store. The data in the Triple Store can be described using Description Logic, then inferred over using rules or analyzed with complex algorithms. This set of components was shown in the diagram to help the audience visualize the complete flow. Finally, he showed the same boiled frog with a disclaimer that no frogs were harmed during the creation of his slides.
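The end-to-end flow can be sketched in a few lines of plain Python: extract entities into RDF-style triples, put them in a (toy) triple store, derive new facts with a simple inference rule, then query. The entity names and the transitive `locatedIn` rule here are invented for illustration; a real system would use an actual triple store and a reasoner.

```python
# Toy triple store: a set of (subject, predicate, object) tuples.
triple_store = set()

# "Entity Extraction" step: suppose we pulled these facts out of source pages.
source = [("SanJose", "locatedIn", "California"),
          ("California", "locatedIn", "USA")]
triple_store.update(source)

def infer_transitive(store, predicate):
    """Inference rule: treat the predicate as transitive and add the closure."""
    changed = True
    while changed:
        changed = False
        for (s1, p1, o1) in list(store):
            for (s2, p2, o2) in list(store):
                if (p1 == p2 == predicate and o1 == s2
                        and (s1, predicate, o2) not in store):
                    store.add((s1, predicate, o2))
                    changed = True

infer_transitive(triple_store, "locatedIn")

# Query step: everywhere SanJose is located, including inferred facts.
places = sorted(o for (s, p, o) in triple_store
                if s == "SanJose" and p == "locatedIn")
print(places)  # → ['California', 'USA']
```

Note that `('SanJose', 'locatedIn', 'USA')` was never asserted; it was produced by the inference rule, which is exactly the payoff of the rules-over-a-triple-store stage in Dave's diagram.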

Right after Dave McComb was done, there was a 15-minute break, and then Ivan Herman’s talk started. He began with the tools and the current state of the art in Semantic Web technology, showing several tools available in specific areas of the Semantic Web and how far we have come with tooling, open source or commercial, in the current market.

Semantic Tech Conference 2008

Twine – hot or not?

March 11, 2008

Twine? A site completely based on Semantic Web technology, implemented by Radar Networks (in San Francisco). It is a new way of organizing your personal information that gives you easy access and navigation. In the demo, it presented itself as a content bookmarking service where you can organize, share, and discover any type of information you find on the Web, which gets parsed and logged into Twine. Unlike Facebook, where the network is limited to people, a Twine can be about anything. Powered by semantic understanding, Twine automatically organizes information, learns about your interests, and makes connections and recommendations. The more you use Twine, the better it understands your interests and the more useful it becomes.
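The "learns your interests and makes connections" idea can be sketched with a simple similarity measure over tag sets. The Jaccard-overlap scoring and the user data below are my own stand-ins; Twine's actual recommendation machinery is not public.

```python
# Recommend the user whose interest tags overlap mine the most.
# Jaccard similarity is a stand-in scoring function, not Twine's algorithm.
def jaccard(a, b):
    """Overlap between two tag sets, from 0.0 (disjoint) to 1.0 (identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

my_tags = {"semantic-web", "rdf", "nlp"}
users = {
    "alice": {"rdf", "sparql", "semantic-web"},
    "bob": {"cooking", "travel"},
}

best = max(users, key=lambda u: jaccard(my_tags, users[u]))
print(best)  # → 'alice'
```

The "more you use it, the better it gets" behavior falls out naturally: every bookmark adds tags to your set, which sharpens the overlap scores.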

So is Twine hot or not? I say it is recherché.

Check it out @

YAHOO! Pipes

October 3, 2007

This looks like an early incarnation of the Semantic Web. It allows webpages to read data feeds from multiple sources located at any URI, and adds the flexibility to recombine them into a customized output.

An example that I played around with was ‘Apartment near Something’, where apartment listings from one feed could be recombined with YAHOO Local to find apartments in a given area. Really cool.

If you have not seen YAHOO Pipes yet, you may check it out here.