Francisco Iacobelli's academic website

I am an assistant professor in the Computer Science Department at Northeastern Illinois University. I can usually be found in LWH (Lech Walesa Hall) 3059. Check my classes and office hours here. My email is at the bottom of the page.

Information is essential for social interactions. Most of us get a great deal of this information from the web: think news, blogs, Wikipedia, Twitter, Facebook, etc. To push the envelope of what can be done with this information, my research explores intelligent strategies for filtering and presenting information. These strategies are based on algorithms built with clear functional goals that afford multiple distinctions based on the nature of the data retrieved. In addition, the information presented should be relevant and distinct; that is, every resource should provide a unique piece of information that is potentially useful and on-point with a particular topic or event. This information should help consumers of web content get a complete report on the topics or events of their interest. I am also interested in exploring techniques from computational linguistics that can shed light on the underlying mechanisms of social interactions, such as political debate.

I got my undergraduate degree in Systems Engineering and Informatics at Universidad Diego Portales in Santiago, Chile. I earned a Ph.D. in Computer Science at the Infolab, in the Computer Science department of Northwestern University, and a Master's in Computer Science with a concentration in AI at DePaul University. Other interests include computational linguistics and enhancing social interactions with technology that can help make a difference, such as intelligent tutoring systems and other aspects of AI applied to the education of children. In the past I worked on developing Virtual Peers for minority children with Justine Cassell at the ArticuLab.

Here's my CV

You can read more about my research and publications by clicking the tabs above.

Current Research

In every interaction there is information flow. This information comes from many sources, the web being one of the main ones. To support the information flow in these interactions, I build and evaluate computable proxies to retrieve and present useful information to users. I strive to present information that is purposeful and distinct with respect to the information the user cares about.

Purposeful retrieval and presentation refer to information retrieval mechanisms that work with functionally clear goals and make those goals visible to users at many levels. Distinct information, in terms of web results, means that every result displayed should provide a unique and potentially valuable piece of information. This is a departure from traditional models of information retrieval based on similarity, which invariably present "more of the same".

To support distinct and purposeful information at web scale I am also interested in exploring (a) scalable methods for information filtering and retrieval; (b) development of frictionless applications, that is, applications that gather and present information with minimal interruptions to the user; (c) strategies for proactively selecting topics of interest; and (d) social media as a way to provide feedback and improve the information presented.

To create these applications I start from observations about real-world interactions and, based on sound social and cognitive science, I apply novel as well as proven engineering principles. I use NLP, traditional IR techniques, machine learning, computational models of human strategies for information retrieval, and ideas borrowed from social media. While I build these applications, I research innovative ways in which computers can provide relevant, timely, distinct and purposeful information with respect to content that is of interest to users and can support social interaction. I draw on the literature in the social sciences, cognitive science and HCI to both design and evaluate these applications, paying special attention to the way in which they interact with users and the way in which users experience them.

In sum, I explore social interactions and ways in which technology can augment information to favor learning. This, in turn, may lead to richer social interactions. Check my interview on the topic of new media in the French newspaper Le Monde.

Projects:

  • Tell Me More: An online news reader that provides relevant details from other news sources. It has been featured in The New Scientist, Le Monde, and the French technology magazine L'Atelier.
  • MakeMyPage: An aggregator that strives to retrieve diverse and on-point information about popular topics.

Publications

Alastair Gill and Francisco Iacobelli. (2014). In Proceedings of the 14th Meeting of the Society for Text and Discourse. Chicago.

Joseph Chaney, Andrew Fiss, Francisco Iacobelli and Paul E. Carey Jr. (2014). In GPS for Graduate School: Students Share Their Stories. Mark J.T. Smith, Mary M. Browne (Eds.). Amazon

Francisco Iacobelli and Aron Culotta. (2013). In Proceedings of ICWSM 2013, Workshop on Personality Classification. [PDF]

Francisco Iacobelli, Nathan Nichols, Larry Birnbaum and Kristian Hammond. (2012). In Human-Computer Interaction: The Agency Perspective (M. Zacarias and J. Valente de Oliveira, Eds.), Springer.

Francisco Iacobelli, Alastair Gill, Scott Nowson and Jon Oberlander. (2011). In Proceedings of the Workshop on Machine Learning for Affective Computing at Affective Computing and Intelligent Interaction (ACII 2011). [PDF]

Alastair Gill, Francisco Iacobelli, Nigel Gilbert. (2011). 21st Meeting of the Society for Text and Discourse (ST&D 2011). [PDF]

Jessie Paterson, Christian Lange, Iqbal Akhtar, Francisco Iacobelli, Paul Anderson, and Annette Leonhard. (2010). Proceedings of the International Conference of Education, Research and Innovation (ICERI) 2010, November 2010. [abstract]

Jessie Paterson, Christian Lange, Iqbal Akhtar, Francisco Iacobelli, Paul Anderson, and Annette Leonhard. (2010). (Poster) In eAssessment Scotland (EAS2010).

Francisco Iacobelli, Nathan Nichols, Larry Birnbaum and Kristian Hammond. (2010). In Proactive Assistant Agents (PAA2010), AAAI 2010 Fall Symposium. Nov. 2010, Arlington, VA. [pdf]

Francisco Iacobelli, Kristian Hammond and Larry Birnbaum. (2010). Third Workshop on Mashups, Enterprise Mashups and Lightweight Composition on the Web (MEM2010) at WWW2010. April 2010, Raleigh, NC. [pdf]

Francisco Iacobelli, Larry Birnbaum, Kristian Hammond. (2010). International Conference on Intelligent User Interfaces (IUI) 2010. Feb. 2010, Hong Kong. [pdf]

Francisco Iacobelli, Larry Birnbaum, Kristian Hammond. (2010). International Conference on Intelligent User Interfaces (IUI) 2010. Feb. 2010, Hong Kong. [poster pdf]

Francisco Iacobelli, Kristian Hammond and Larry Birnbaum. (2009). 3rd International Conference on Weblogs and Social Media. May 2009, San Jose, CA. [pdf]

Francisco Iacobelli, Alastair J. Gill, Scott Nowson and Jon Oberlander. (2009). 19th Annual Meeting of the Society for Text and Discourse. July 2009, Rotterdam, The Netherlands. [poster pdf]

Francisco Iacobelli and Justine Cassell. (2007). 7th International Conference on Intelligent Virtual Agents (IVA2007), Sept 2007, Paris, France. [pdf]

David Huffaker, Joseph Jorgensen, Francisco Iacobelli, Paul Tepper and Justine Cassell. (2006). In the Workshop on Analyzing Conversations in Text and Speech (ACTS) at HLT-NAACL, June 8, New York City, NY. [pdf]

CS 200 Programming I.

This course provides basic programming skills. By the end, students should be able to write simple programs using the Java programming language.
Click on the link above to access the whole course minus quizzes and self assessments.

CS 335 Introduction to Artificial Intelligence.

This course exposes students to seminal algorithms and concepts related to artificial intelligence. Click on the link to go to the most current iteration of this course.

CS 207 Programming II.

This course provides programming skills that go beyond the basics into object-oriented programming, as well as standard techniques to solve common programming problems that arise in everyday programs, such as sorting, working with files and the web, etc. At the end of the course you should be able to use these techniques effectively and be proficient in researching solutions to other, more complex, programming challenges.
What follows is a playlist of YouTube videos that closely aligns with the topics covered. Use the controls to navigate the playlist (it is not finalized yet).

CS 347M Mobile Application Development.

This course provides the concepts and practice necessary to effectively develop mobile applications, on both the iOS and Android platforms.
Here are the course videos for iOS and for Android.


CS 412 Web Application Development.

This course provides skills that go beyond the basics of creating web pages into creating applications that run on the web. At the end of the course students should be able to create interactive and sophisticated web applications that combine HTML, CSS and JavaScript.

CS 419 Informatics.

This course presents students with basic, yet seminal, techniques for machine learning and data mining. At the end of the course the student should be able to understand how these, and other more sophisticated, techniques work and when it is appropriate to apply them. This course has considerable overlap with CS 435 (see below).
Some videos on some topics can be found here

CS 435 Expert Systems.

Although dated in name, this course presents core machine learning techniques used in decision making. At the end of this course students will understand the basic categories of supervised learning algorithms for decision making. This course has considerable overlap with CS 419.

Latent Dirichlet Allocation (Topic Modeling)

If we assume documents are composed of different topics, then we can assume that certain words belong to certain topics. Therefore, a document is a set of words that are more or less grouped according to their topic. For example, a sports story can have a few topics in it, such as "the actual game" (score, plays, etc.), "the players" (injuries, drug testing, etc.) and "speculation" (what will happen against the next rival). For each of these topics there are words that are strongly associated with them. For example, "score" is strongly associated with "the actual game" topic, while "steroids" is probably associated with "the players" topic. However, "score" may also be associated with the "speculation" topic, albeit less strongly.

Because words are associated with topics with varying "strengths", we can talk about the probability with which each word is associated with each topic. Similarly, we can talk about the probability that a document contains each topic.
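Written as probabilities (this notation is my own shorthand, following the usual LDA papers rather than anything specific to the software on this page): let \theta_{d,k} be the probability that document d is about topic k, and \phi_{k,w} the probability that topic k produces word w. The chance of seeing word w in document d is then a mixture over topics:

			 p(w \mid d) = \sum_{k=1}^{K} p(\text{topic } k \mid d)\, p(w \mid \text{topic } k) = \sum_{k=1}^{K} \theta_{d,k}\, \phi_{k,w}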

Now, when we don't know the topics a priori, nor the words associated with them, nor the topics contained in a document, but we do know the words in each document, we need to infer the topical structure of the documents. For this we need a generative model: a model that describes how documents could have been generated from topics, and that infers those topics by iterating and assigning topics to words that occur together. LDA is one such model, perhaps the most popular. You can find a lot more on David Blei's Topic Modeling webpage.
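Concretely, the generative story behind LDA (standard notation summarized from the literature, not from this page) is: choose Dirichlet priors \alpha and \beta, then

			 \phi_k \sim \mathrm{Dirichlet}(\beta)                 \quad \text{for each topic } k = 1, \dots, K
			 \theta_d \sim \mathrm{Dirichlet}(\alpha)              \quad \text{for each document } d
			 z_{d,i} \sim \mathrm{Multinomial}(\theta_d)           \quad \text{for each word position } i \text{ in } d
			 w_{d,i} \sim \mathrm{Multinomial}(\phi_{z_{d,i}})     \quad \text{the observed word at that position}

Inference runs this story backwards: given only the observed words, it estimates the hidden topic assignments z and, from those, \theta and \phi, typically by Gibbs sampling or variational methods.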

There are several LDA implementations. Below is my take on it (because I've had issues with the others in one aspect or another).

DOWNLOAD: Just download and uncompress this Lda Software wherever you want to run it. You must have Java 1.6 installed (OpenJDK is fine). This is a collection of three Java programs. They are run by typing:

			 java <Name of the program> <Options>
		 
The programs are:
  • Corpus: which creates a corpus from a directory of text files
  • LdaGibbs: which runs LDA on a corpus
  • LdaAnalysis: which can create a corpus, run LDA and save the results. It can also run from an already created corpus.
These programs work well, but interfacing with them is in its early stages, so let me know how I can make them nicer. Of course, bug reports and oddities are welcome feedback.
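To give a feel for what a Gibbs-sampling LDA program like LdaGibbs does internally, here is a minimal, self-contained sketch of collapsed Gibbs sampling in Java. It is illustrative only: it is not the code in the download, and the toy corpus, class name and hyperparameter values are made up for the example.

			 import java.util.Random;

			 // Toy illustration of collapsed Gibbs sampling for LDA.
			 // Corpus, class name and hyperparameters are invented for the example.
			 public class LdaSketch {
			     public static void main(String[] args) {
			         // Toy corpus: each row is a document, each entry a word id in [0, V).
			         int[][] docs = { {0, 1, 2, 0, 1}, {3, 4, 3, 4, 2}, {0, 2, 1, 0} };
			         int V = 5;           // vocabulary size
			         int K = 2;           // number of topics
			         double alpha = 0.5;  // document-topic Dirichlet prior
			         double beta = 0.1;   // topic-word Dirichlet prior
			         int iterations = 500;
			         Random rng = new Random(42);

			         int[][] docTopic = new int[docs.length][K]; // n(d, k)
			         int[][] topicWord = new int[K][V];          // n(k, w)
			         int[] topicTotal = new int[K];              // n(k)
			         int[][] z = new int[docs.length][];         // topic assigned to each token

			         // Random initialization of topic assignments.
			         for (int d = 0; d < docs.length; d++) {
			             z[d] = new int[docs[d].length];
			             for (int i = 0; i < docs[d].length; i++) {
			                 int k = rng.nextInt(K);
			                 z[d][i] = k;
			                 docTopic[d][k]++;
			                 topicWord[k][docs[d][i]]++;
			                 topicTotal[k]++;
			             }
			         }

			         // Gibbs sweeps: resample each token's topic given all other assignments.
			         double[] p = new double[K];
			         for (int it = 0; it < iterations; it++) {
			             for (int d = 0; d < docs.length; d++) {
			                 for (int i = 0; i < docs[d].length; i++) {
			                     int w = docs[d][i];
			                     int old = z[d][i];
			                     // Remove this token from the counts.
			                     docTopic[d][old]--;
			                     topicWord[old][w]--;
			                     topicTotal[old]--;
			                     // Full conditional: p(z=k) proportional to
			                     // (n(d,k)+alpha) * (n(k,w)+beta) / (n(k)+V*beta)
			                     double sum = 0;
			                     for (int k = 0; k < K; k++) {
			                         p[k] = (docTopic[d][k] + alpha)
			                              * (topicWord[k][w] + beta) / (topicTotal[k] + V * beta);
			                         sum += p[k];
			                     }
			                     // Sample a new topic from the conditional distribution.
			                     double u = rng.nextDouble() * sum;
			                     int chosen = K - 1;
			                     for (int k = 0; k < K; k++) {
			                         u -= p[k];
			                         if (u < 0) { chosen = k; break; }
			                     }
			                     z[d][i] = chosen;
			                     docTopic[d][chosen]++;
			                     topicWord[chosen][w]++;
			                     topicTotal[chosen]++;
			                 }
			             }
			         }

			         // Report the estimated per-topic word probabilities (phi).
			         for (int k = 0; k < K; k++) {
			             System.out.print("topic " + k + ":");
			             for (int w = 0; w < V; w++) {
			                 System.out.printf(" %.2f", (topicWord[k][w] + beta) / (topicTotal[k] + V * beta));
			             }
			             System.out.println();
			         }
			     }
			 }

Each sweep temporarily removes a token from the counts, recomputes its full conditional over topics, and re-samples it; once the sampler settles, the count matrices yield estimates of the topic-word and document-topic distributions discussed above.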