Wednesday, August 31, 2005

Maturana Notes finished

Got the Maturana notes finished. Also got feedback from John Lee on the Goodman notes, which I am busy incorporating into the latest set of notes.

Tuesday, August 30, 2005

Scheming

Basically, I finished the corpus work for Johanna and proceeded to teach myself Scheme using the excellent "Little Schemer" book, and to brush up on my lambda calculus by reviewing the excellent Barendregt notes. Being already familiar with Haskell and a bit of LISP, Scheme strikes me as a sort of simplified Haskell or LISP. Still, it's good to be back in a functional frame of mind!
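To get back into the lambda-calculus mindset, here is a quick sketch of the standard Church-numeral encoding, written with Python lambdas as a stand-in for Scheme (the helper names are my own):

```python
# Church numerals: the numeral n is a function applying f to x n times.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))

# Addition: (m + n) applies f m times on top of n applications of f.
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

# Decode a Church numeral to a native int by counting applications.
to_int = lambda n: n(lambda k: k + 1)(0)

one = succ(zero)
two = succ(one)
three = add(one)(two)
```

The same definitions transliterate almost word-for-word into Scheme, which is part of its charm.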

Sunday, August 28, 2005

Corpus work...

Almost done spell-checking and recorrecting the corpus for Senga's regrading. Over two hundred children's stories categorized...and it should finally be done by the end of the month!

Saturday, August 27, 2005

Reading Notes for Johanna

Just to keep track of the recommended reading Johanna gave me:

1) Semantic Role Labelling: With the release of PropBank, semantic role labelling is all the rage right now. The real question is whether our system should do this, and to what extent it already can. I'll have to take a look at what ccg2sem does, but I would guess that unless Johan added WordNet features, it doesn't. The paper "The Necessity of Syntactic Parsing for Semantic Role Labeling" shows that semantic role labelling should be divided into two distinct tasks: pruning, which identifies possible arguments, and then matching argument candidates to roles. And as Punyakanok et al. discover, using a full parse helps mainly by identifying the correct constituents as argument candidates. The other paper, "Semantic Argument Classification Exploiting Argument Interdependence", goes even further by saying that any semantic roles already identified should also be used, but this produces only a one percent increase in recall.
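Just to fix the idea for myself, the two-stage architecture can be caricatured in a few lines (the windowed pruning and the positional role heuristic below are invented stand-ins, not the actual features from the papers):

```python
def label_roles(constituents, predicate_index):
    # Stage 1 (pruning): keep only constituents near the predicate,
    # a crude stand-in for parse-tree-based pruning.
    candidates = [c for c in constituents
                  if abs(c["index"] - predicate_index) <= 2]
    # Stage 2 (classification): a crude positional heuristic --
    # candidates before the verb get ARG0, those after get ARG1.
    return {c["text"]: ("ARG0" if c["index"] < predicate_index else "ARG1")
            for c in candidates}

# "Nils jumps [on] the goose": predicate "jumps" at index 1.
sentence = [{"index": 0, "text": "Nils"},
            {"index": 2, "text": "the goose"}]
roles = label_roles(sentence, 1)
```

The point of the two stages is that most of the benefit of full parsing shows up in stage 1, in picking out the right candidate constituents.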

2) The rest of the papers are about "story comprehension" systems, which basically (using a sample corpus from Remedia, which I imagine we could get hold of: sixty children's stories with question-and-answer sets) just try to identify the relevant sentence that contains the "answer". The systems evolved from "Deep Read", which used pure "bag-of-words" approaches, to a rule-based approach that assigned different scores to different levels of "clues" (Riloff and Thelen), to an interesting system (Grois and Wilkins) that uses word-level transformations (directed by Q-learning) to turn the question ("Who does Nils jump on the back of?") into an answer template ("Nils jumps on the back of ____"). This evolution goes roughly from 30 to 40 to 50 percent F-measure. It seems like the last method is smart but hampered by working at the word level - after all, could we not do the same matching using a dependency tree or some other semantic representation? One could almost think of a question as an empty semantic representation and do a search over available semantic representations to complete the model.
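The word-level transformation idea is easy to caricature: rewrite the question into a cloze template, then score candidate sentences by overlap (a single hand-written rule here, nothing like the learned transformations of Grois and Wilkins):

```python
import re

def question_to_template(question):
    # One hand-written rewrite rule: "Who does X V ...?" -> "X Vs ... ____".
    m = re.match(r"Who does (\w+) (\w+) (.*)\?", question)
    if m:
        subject, verb, rest = m.groups()
        return f"{subject} {verb}s {rest} ____"
    return None

def best_match(template, sentences):
    # Score each candidate sentence by word overlap with the template.
    twords = set(template.lower().split()) - {"____"}
    return max(sentences, key=lambda s: len(twords & set(s.lower().split())))

template = question_to_template("Who does Nils jump on the back of?")
story = ["The goose flies south.", "Nils jumps on the back of the goose."]
answer_sentence = best_match(template, story)
```

Doing the same matching over a dependency tree instead of a word string would just mean swapping the overlap function for one over tree fragments.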

3) Lastly, the "pyramid model" (Nenkova and Passonneau) is interesting: basically they use humans to identify "semantic content units", bits of frequently occurring content that are given a weight by how often human annotators use them. The scores appear to be fairly stable as the number of model summaries grows, which is good news for any standard. It seems like this is something else that one might just want to do at the semantic level, as van Halteren and Teufel have apparently been up to. However, they do not weight theirs (as we would), nor is it clear why one would want a human involved at all if one could just straightforwardly count overlap automatically.
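The weighting scheme itself is simple enough to sketch (my own simplification of the pyramid score: an SCU's weight is the number of model summaries containing it, and a peer's score is its observed weight over the best weight achievable with the same number of SCUs):

```python
def pyramid_score(peer_scus, model_summaries):
    # Weight each semantic content unit (SCU) by how many human model
    # summaries express it.
    weights = {}
    for summary in model_summaries:
        for scu in summary:
            weights[scu] = weights.get(scu, 0) + 1
    # Peer score: total weight of the peer's SCUs, normalised by the
    # maximum weight achievable with the same number of SCUs.
    observed = sum(weights.get(scu, 0) for scu in peer_scus)
    ideal = sum(sorted(weights.values(), reverse=True)[:len(peer_scus)])
    return observed / ideal if ideal else 0.0

models = [{"a", "b"}, {"a", "c"}, {"a", "b"}]  # three model summaries
score = pyramid_score({"a", "c"}, models)      # (3 + 1) / (3 + 2)
```

Replacing the string-identity of SCUs with semantic-level units would leave this scoring machinery untouched.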

Friday, August 26, 2005

Post-meeting with Johanna Notes

Overall, Johanna really wants to see more work done on the NLP pipeline to produce semantic representations. I hope I made it clear that this will be a good example of an application of the framework (both philosophical and technical) that I want to work on with Henry and Andy. However, the conceptual leap is to make the various bits of the thing work as a website. Now off to get the linode server working....

As for the actual capabilities of the server, it should be able to do the following on the text:

1) Optional Morphological Preprocessing
2) Word and Sentence Detection
3) Named Entity and Date Detection
4a) Chunking
4b) CCG-parsing
4c) Dependency Grammar Parsing (based on Optimality Theory)
5a) Coreference Resolution via Syntax
5b) Coreference Resolution via Semantics
6) Temporal Annotation of Semantic Representation
7a) Propositional Semantic Representation
7b) Propositional with Thematic Roles Semantic Representation
7c) Full First-Order Logic form
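The whole thing is just staged function composition, which can be sketched in a few lines (the two stage stubs below are hypothetical placeholders for the real components):

```python
def compose(*stages):
    # Run the input through each pipeline stage in order.
    def pipeline(text):
        result = text
        for stage in stages:
            result = stage(result)
        return result
    return pipeline

def sentence_split(text):
    # Crude stand-in for stage 2 (sentence detection): split on periods.
    return {"sentences": [s.strip() for s in text.split(".") if s.strip()]}

def tokenize(doc):
    # Word detection over each sentence.
    doc["tokens"] = [s.split() for s in doc["sentences"]]
    return doc

nlp = compose(sentence_split, tokenize)
doc = nlp("Nils jumps. The goose flies south.")
```

The real version would expose each stage as a Web Service and let the optional stages (morphology, the alternative parsers) be swapped in and out of the composition.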

It will be interesting to see what components I can get up and working by next week. Gotta get the linode server up to host all of this ASAP, as well as get the stuff from Ewan that we had on axon working again.

Thursday, August 25, 2005

Post-meeting with Henry Notes

In summary, Henry basically approves of my chapter outline and the addition of types to functionalXML, although he admits it's ambitious, and he thinks that come October, if I make it to the functionalXML chapter, I'll have something thesis-worthy to submit. Now once I get approval from Johanna on the general outline and narrative part, and get Andy to inspect the philosophy, I'll be ready to write the thesis plan next week.

Second, I think I've noticed an interesting aspect of functionalXML that gives it a strong case for use *in conjunction* with other programming paradigms. Wadler has just posted code snippets of Links, and they've received a more or less negative response from the web programming community at large. The main point of critique is that Links as presented just embeds functional code in HTML, which proves to be a horrible way of doing web programming.

What could be a good selling point of functionalXML is that it is XML-compliant, and can thus serve as a universal format for "embedding" processes (of whatever kind, be it Javascript/AJAX, Web Services, or even Links 0.2) into XML while keeping the document itself XML-compliant. Unfortunately, with PHP or Links the code that does the work is non-XML material embedded in XML, while the AJAX/Ruby on Rails methodology generates parts of an Infoset selectively *but* you can't tell which nodes it's manipulating without first reading the Javascript code. An approach that abstracted away from the programming-language details and just said "the content of this node will be changed by a program", specifying the type and arguments of the program (and optionally its location, such as an HTTP URI for a Web Service, or a reference to a piece of client- or server-side code), would make web design and programming much easier. I'll write this point up over the weekend with some example code inline.
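As a first pass at what I mean, here is a sketch: a made-up <fx:process> element that declares only the type, arguments, and location of the program that will supply a node's content, plus a few lines of Python that recover those declarations from the document (the fx namespace and element names are invented for illustration):

```python
import xml.etree.ElementTree as ET

# A hypothetical declaration: "the content of this node will be changed
# by a program", giving only its result type, arguments, and location.
page = """<div xmlns:fx="http://example.org/fx">
  <fx:process type="string" href="http://example.org/ws/greet">
    <fx:arg name="who">world</fx:arg>
  </fx:process>
</div>"""

def declared_processes(xml_text):
    # Collect every declared process with its type, location, and args,
    # without ever looking at the implementing code.
    ns = {"fx": "http://example.org/fx"}
    root = ET.fromstring(xml_text)
    found = []
    for proc in root.findall(".//fx:process", ns):
        args = {a.get("name"): a.text for a in proc.findall("fx:arg", ns)}
        found.append({"type": proc.get("type"),
                      "href": proc.get("href"),
                      "args": args})
    return found

processes = declared_processes(page)
```

Note that the page stays well-formed XML throughout: any XML tool can see which nodes are program-controlled, which is exactly what the embedded-Javascript approach loses.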

In other news, IMC Scotland just got its own office near the Uni at the Forest Cafe, and I'm in charge of installing networks. Also just installed Ubuntu on my laptop after doing a thorough house-keeping on ibiblio and my laptop.

As for narrative stuff goes, I went through the corpus picking out the stories that needed to be regraded, and had a great meeting with Johan Bos to help guide him with refactoring the XML representation of ccg2sem.

Wednesday, August 24, 2005

Chapter Outline for Ph.D. topic

Chapter Outline:




  1. Introduction
    The intelligent robots envisioned by researchers at the dawn of artificial intelligence have never been created by computer science, and instead what has proved wildly successful is the Web. Informal and undisciplined, the success of the Web challenges both notions of classical artificial intelligence and the analytic philosophy that motivates AI, as well as newer variations of connectionism, dynamic systems theory, and embodied intelligence. The Web is historically situated, and trends in its future development are briefly sketched. There is a noticeable lack of philosophical and formal analysis of the Web, and this thesis will provide both. The main social problem the Web causes is not that of information retrieval but that of information organisation, and a novel solution to the problem in terms of narrative structuring of information is given to demonstrate the value of the philosophical and formal framework proposed.

  2. Computation and the Extended Mind
    Although this is so obvious as to be a truism, the computational theory of the mind is realized by computers, not the human brain. If we take the extended mind hypothesis seriously, then the mind is in a real sense distributed amongst not only the brain and the human body, but aspects of the environment, and so by definition the human mind can include computers, and architectures implemented on them such as the Web. Computers are best understood as an inverse reflection of the capacities of humans: computers allow humans to "off-load" tasks the brain has limited capacity for, such as arithmetic and deduction. A parallel example using the development of written language and logic is given. Computation is taken to be given by its classical mathematical definition, and computation is explored more fully from a philosophical standpoint. The Web is then compared and contrasted with traditional models of language and computation. The Web presents a wholly new challenge for traditional understanding, for the Web exists primarily for the digital communication of information, distinguishing it from both the strict definition of computation and from informal linguistic communication.

  3. The Web and Network Intelligence
    Artificial intelligence and the philosophy of the mind both begin from a premise that has in recent years been shown to be incorrect: intelligence emerges from the mind, which is assumed to be a unitary organization encased in the brain of an individual. This highly influenced artificial intelligence, which conceived that intelligence could be created by giving a computer, as a unitary organization, the correct programs and code. However, intelligence can be conceived of as emerging from the "extended mind", defined as a dense network of interconnections between various machines. This is called "network intelligence" to contrast it with more traditional artificial intelligence. What is traditionally conceived of as the "mind" behind intelligence is the narrative that the network produces to describe itself historically, and the narrative is not necessarily stored in any one component of the system. The Web can be taken as a primary example of network intelligence due to its definition as a "universal" network. The success of the Web is due to its being a manifestation of the extended mind that takes network intelligence into account.

  4. Information and Encoding
    The main traffic of the Web, information, is notoriously hard to analyse. We first begin with a reformulation of Brian Cantwell Smith's theory on the origin of objects in order to lay the grounds for the notions of objects and identity. His theory is extended by a notion of information. We analyse information as a two-fold phenomenon, consisting of "information content" (Dretske and Barwise) and of a methodology of "encoding" (Shannon).

  5. Digital Representation
    Building upon the definition of information previously given, the ideas of presentation and representation are separated. From the work of Haugeland and Goodman, a new definition of digitality is given. The notion of computation is tied to that of causation, and the ideas of syntax and semantics are distinguished in terms of information, digitality, and computation.

  6. Principles of Web Architecture
    The architecture of the Web is explained, as given by the previously developed concepts of digital representation and the principles of universality, extensibility, least power, and the network effect. Close inspection is given to the "Architecture of the World Wide Web" document produced by the W3C, and the current functioning of the Web is contrasted with the REST model.

  7. The Semantic Web as Types
    The Semantic Web is defined as a Web with machine-readable semantics. The current state of the Semantic Web is explained, and the Semantic Web is explained as giving a uniform encoding of identity and representation to information on the Web. This leads to two distinct notions of semantics: semantics as given by the allowed operations of a given computer program, and semantics as given by the information content of a given representation. The Semantic Web is then shown to be a distributed type system, giving a model-theory for the former and a way for users of the Web to formulate the latter. An XML-only solution to binding Semantic Web types to XML is demonstrated. This data is dual-typed, once with a "data type" and encoding specific to computational use, and once again with a "semantic type" and encoding specific to the informational content of the data.

  8. Web Services as Functions
    Web Services are programs that can be called over the Web, and are formally equivalent to functions. Web Services given Semantic Web typing can then be shown to be functions that compute over both semantic and data typing. Given that Web Services are functions and the Semantic Web a type system, a formal analysis of the next generation of the Web can be given: a distributed, truly universal computer.

  9. Computation on the Web: functionalXML
    If the next generation of the Web is a computer, it needs a programming language. A simple XML-based programming language (entitled functionalXML) has recently been proposed by Henry S. Thompson. The language is formally characterized by the lambda calculus; it is then shown how the language can be extended to deal with Web Services and Semantic Web typing via the typed lambda calculus, and how such an extension can be realised in practice.

  10. Personalized Webs
    The question is then: what type of information does the Web traffic in? The Web, as evidenced by the growth of blogs and other personalized forms of information creation and delivery, is the antithesis of the ontologies proposed by projects such as Cyc. Instead of delivering universal "common-sense" information, information is structured to be relevant to the highly personalized environment of the agent. We present a framework in which such information can be displayed in a machine-readable manner compatible with the Semantic Web, using a format entitled Web Proper Names.

  11. Narration and Cognition
    Blogs are how the narratives of human agents on the Web are extended computationally. The work on personalized ontologies is extended to deal with linguistic narratives. The cognitive development and aspects of narratives are explored, with examples drawn from a corpus of stories generated by children.

  12. Computation and Narration
    A formal model of narratives, the narrative calculus, is expressed in terms of a propositional calculus that is optionally enriched by ontologies, temporal ordering, and a probabilistic weighting of its importance in the narrative.

  13. Narrative Detection
    A pipeline for the extraction of the narrative calculus is created using a series of Web Service NLP components, composed using functionalXML and storing the results as Semantic Web types using Web Proper Names. The results of detecting narratives are shown both using children's stories and using the activity of Web users.

  14. Narrative Generation on the Web
    A reverse pipeline for the generation of natural language texts from the narrative calculus is given, again composed using functionalXML. These texts can be considered the automatic generation of "blogs" documenting the Web activity of the browsers of the Web. Since they are expressed in the informal language of everyday life, they are simpler for humans to understand than mere lists, and since these narratives are augmented with Semantic Web types and are open to extension, they offer a level of versatility unique to the Web.

  15. Conclusion
    This thesis analyzes the Web from both a philosophical and formal standpoint, explaining the challenge presented by the Web to artificial intelligence. It demonstrates the value of both the philosophical and formal framework by using the example application of personalized narrative generation for the management of Web information. The final chapter concludes by looking at the embedding of the Web in society and sketching out future avenues of research.

Saturday, August 20, 2005

Post-ESSLLI Musings

First, Hans Kamp's lecture on DRT and temporal relations was great: he basically merged an event calculus (i.e. of the type I use!) into DRT via constraint logic programming. No implementation though, but the theory seems fine. The idea of C-OWL also seems much more feasible than OWL, and I had a great lunch talk with the fellow who gave the talk on "Making the Semantic Web More Semantic", in which he supports grounding out in some psychophysical dimensions. Still, maybe I've been reading too much Maturana recently - but how do we know even those are reliably objective ("for humans" I might add being very important), and even then, what use would those be for most things, such as abstractions like business transactions and very real cultural items like the Eiffel Tower? He also claimed that communication does not need to have "reference" anywhere, but that the pattern of communication "itself" can determine the meaning of communication. Maybe for some...but when in doubt in a foreign country, the gesticulate-and-point trick always seems to work. Good ending to ESSLLI, and I'm off again for camping this weekend.

Thursday, August 18, 2005

One-line and One-paragraph Ph.D. Thesis Proposal

I'll be honest: it's been hard to craft a Ph.D. thesis proposal that fits the following constraints:

  1. Manages to combine such diverse areas as Web architecture, narratives, and philosophy of the mind - but I think I may have just done it. Note that these three areas are covered by my three Ph.D. thesis supervisors, Henry Thompson, Johanna Moore, and Andy Clark.

  2. Has a coherent strand of thought....the Web embodies parts of the human mind that our biological brain is not too good at, and so to use the Web we need to fall back on tools that focus on what we're good at. Just like the visual desktop is a better metaphor for data for most people than the black hole of the command prompt, a narrative (blog) is a better way to organise data than a big list.

  3. Has three gee-whiz factors: "This guy is doing the philosophy of the Web", "That sure looks a lot like Scheme for Web Services in XML", and "A blog that writes itself...cool!"

  4. I already have substantial parts of it done...so it's just a matter of finishing the work I already started! And each of my advisors also has relevant interests and work in the area.



  • Title:

    The Semantics and Significance of the Web: From Information to Narrative

  • One Sentence:

    This thesis analyzes the philosophical and computational architecture of the Web in order to analyse and generate personal narrative texts on the Web.

  • One Paragraph:


    This thesis analyzes the philosophical and computational architecture of the Web, and uses the resultant framework to analyze and generate ontology-rich narrative texts on the Web. The Web is treated as an extension of the human mind, one that is successful because it allows this extension to take place using universally accessible digital representations. The Web benefits from the sharing of information within a universal network, yet the most useful information is that which is easily accessible and personalized. This "network intelligence" is shown to be diametrically opposed to both the classical and embodied traditions in artificial intelligence, as both assume a single cognitive agent, while network intelligence takes into account how intelligence is developed through the flow of information through a Web of connections without sacrificing the embodied and personalized nature of all information. The Web is characterised as a universal information space, and a philosophically rigorous model of information, embodiment, encoding, and representation on the Web is developed. The next stage of Web development, as presented by the Semantic Web and Web Services, is then conceived as the transformation of the Web from a universal information space to a universal computation space, since the Semantic Web can be considered a particular kind of typing and Web Services as functions that operate over those types. To return from abstraction to grounding, the problem of organising information on the Web is difficult, and one of the earliest solutions developed by humans for organising their personal information is the narrative. We use the Web framework developed earlier to construct a program that detects and analyses narrative texts on the Web through the detection of both narrative structure and the use of ontologies. This program consists of a complex Web Service-based NLP pipeline for the analysis and generation of narratives, based on an empirical study done with children's stories. It allows the automatic detection of narrative texts in free text, and can therefore be used to automatically generate Web-based personal narratives from text gathered by users on the Web.

Wednesday, August 10, 2005

Autopoiesis

Also just finished "Autopoiesis" by Maturana and Varela, which Henry recommended I read earlier, reading it in between ESSLLI classes. Overall, the book's project is impossible - trying to describe a world using language without a third-person "objective" perspective - but a lot of ideas, including invariance and embodiment, are crucial here. Interestingly enough, their idea of an autopoietic system is of a system where all the components have no "input" or "output" but function as a co-ordinated whole (although the individual components may have allopoietic inputs and outputs, these do not interface with the system as a whole) in order to maintain and recreate the system. Seems like the best definition of "life" I've heard yet. However, it does have difficulties. For example, how can one realistically draw the line between invariant components of the system and those that are not? For example, I eat food. It maintains me, and is part of my life. However, I am not eating all the time, but only some of the time. Yet I need food to prevent dying, and must have physical contact with it. To what extent is physical connection needed for something to count as a component of an autopoietic system? Three times a day, as with food? How about the Web? How much do I have to surf for the Web to be part of my autopoietic existence? And how long till I become a component of the autopoietic existence of the Web? Again, I think these things are more fuzzy than Maturana makes them out to be...

And at ESSLLI, Cem Bozsahin gave an excellent intro to CCG, and the summarization class has been great! Took a brief trip over the weekend...



Enjoying surprisingly good Scottish weather near Elgol

Tuesday, August 09, 2005

ESSLLI 2005 and Ph.D. Thoughts.

For the next two weeks I'm going to be at ESSLLI, the European Summer School for Logic, Language, and Information, which is at Heriot-Watt. It's a good hour's trip there and back every day, and it lasts from 9 in the morning to at least 6:30, so I'm not sure how much I'm going to be able to do in my spare time. The classes look interesting: I should be taking classes on summarisation, information retrieval, and metaphor processing, as well as reasoning on the Semantic Web and NLP for multimedia applications, and attending a workshop on the logical foundations of grammar. There should also be great talks on statistics and NLP, as well as "Why the Semantic Web isn't Semantic".

However, while I took a brief trip in the highlands to unwind post-G8, I started to really hash over ideas about bringing the Ph.D. thesis together. Basically, I have an all-star team of advisors, and now have to pull together my three favorite topics: philosophy, Web architecture, and NLP. I'm thinking that I could write up how the Web affects our ideas on information, representation, and digitality (thus putting the subject matter of the Web on firm philosophical ground with regards to AI), then proceed to use that terminology to formalize a theory of the Web (Semantic Web/XML Schema as types, Web Services as functions), and finally make the whole thing concrete by using a combination of Henry Thompson's lambdaXML (it appears that Wadler's Links just won't happen in time, and it's a much more ambitious project than the rather simple one I need!) and the GridNLP stuff I used to work on with Ewan Klein to make a program that analyzes web caches and searches, creating narratives out of them using techniques pioneered by Johanna Moore. That's no small amount of stuff - but I already have bits and pieces of it done, and most of it just needs to be tied together.