Thursday, December 14, 2006
Predicting bugs in code changes using SCM information
This is to remind me that I might want to watch the whole video. Google TechTalks, March 8, 2006, Jim Whitehead. Jim Whitehead is an Assistant Professor of Computer Science at the University of California, Santa Cruz. He has recently been developing a new degree program on computer gaming, the BS in Computer Game Engineering. Jim received his PhD in Information and Computer Science from UC Irvine in 2000.
Abstract: Almost all software contains undiscovered bugs, ones that have not yet been exposed by testing or by users. Wouldn't it be nice if there were a way to know the location of these bugs? This talk presents two approaches for predicting the location of bugs. The bug cache contains 10% of the files in a software project.
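The abstract cuts off there, but as a note to myself on the idea: if I'm understanding it right, the bug cache works roughly like an LRU cache over files. Every time a bug fix touches a file, that file moves to the front of the cache, and the least recently hit file falls out once the cache is over capacity; files currently in the cache are the predicted bug locations. This is my own sketch, not code from the talk:

```python
from collections import OrderedDict

class BugCache:
    """Rough sketch of a bug cache: files recently involved in bug
    fixes are predicted to contain future bugs. Hypothetical API;
    the real approach reportedly also pulls in co-changed and
    newly added files, which this sketch skips."""

    def __init__(self, capacity):
        self.capacity = capacity     # e.g. 10% of the project's files
        self.cache = OrderedDict()   # filename -> most-recently-hit order

    def record_bug_fix(self, filename):
        # Move the fixed file to the most-recently-used position.
        self.cache.pop(filename, None)
        self.cache[filename] = True
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used

    def predicted_buggy_files(self):
        return list(self.cache)
```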
Wednesday, October 18, 2006
Filtering Blog posts
I use Bloglines to read weblogs. I recently added a site that posts different deals on the web every day because I was going to buy a new monitor. Anyway, I still like to see what's going on, but there are a lot of deals I'm not interested in. I'd really like to be able to filter the results.
Bloglines doesn't have any filtering capability, but I could create a proxy server to do the filtering for me. So for example: instead of giving Bloglines http://feed.com/atom.xml, I would give it http://myproxy.com/blogfilter/feed.com/atom.xml. My 'blogfilter' application would make the request to feed.com, get the data, filter stuff out, and return the results.
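A rough sketch of the filtering core, assuming an Atom feed and a simple keyword blocklist (the name filter_feed is made up, and the URL-rewriting and web-server plumbing are left out):

```python
import urllib.request
import xml.etree.ElementTree as ET

ATOM = "{http://www.w3.org/2005/Atom}"

def filter_feed(feed_url, blocked_keywords):
    """Fetch an Atom feed and drop entries whose title, summary,
    or content mentions any blocked keyword (case-insensitive)."""
    with urllib.request.urlopen(feed_url) as resp:
        root = ET.fromstring(resp.read())
    for entry in list(root.findall(ATOM + "entry")):
        text = " ".join(
            (el.text or "")
            for tag in ("title", "summary", "content")
            for el in entry.findall(ATOM + tag)
        ).lower()
        if any(kw.lower() in text for kw in blocked_keywords):
            root.remove(entry)   # strip the unwanted entry from the feed
    return ET.tostring(root, encoding="unicode")
```

The web-facing part would just parse the original feed URL out of the request path, call this, and return the result as application/atom+xml.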
How should I filter? The simplest approach would be keywords: remove entries with certain keywords in the title or body. You could have per-blog filters, too. A next step might be Bayesian filtering, but then you'd need an easy way to "train" the system. The proxy could add a small snippet of JavaScript at the end of each post. The JavaScript would be loaded from my site and would create a little panel with options to rate the post. That rating data would then train the Bayesian filter.
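For the Bayesian step, here's a toy version of the classifier the proxy could train from those ratings (hypothetical names; the rating endpoint and the JavaScript panel aren't shown):

```python
import math
import re
from collections import Counter

class NaiveBayesFilter:
    """Toy two-class (keep/hide) naive Bayes text filter with
    Laplace smoothing; labels would come from the rating panel."""

    def __init__(self):
        self.word_counts = {"keep": Counter(), "hide": Counter()}
        self.doc_counts = {"keep": 0, "hide": 0}

    def train(self, text, label):
        self.doc_counts[label] += 1
        self.word_counts[label].update(re.findall(r"[a-z']+", text.lower()))

    def score(self, text, label):
        total = sum(self.word_counts[label].values()) or 1
        vocab = len(set(self.word_counts["keep"]) |
                    set(self.word_counts["hide"])) or 1
        # Log prior with add-one smoothing over the two classes.
        logp = math.log((self.doc_counts[label] + 1) /
                        (sum(self.doc_counts.values()) + 2))
        for w in re.findall(r"[a-z']+", text.lower()):
            logp += math.log((self.word_counts[label][w] + 1) /
                             (total + vocab))
        return logp

    def should_hide(self, text):
        return self.score(text, "hide") > self.score(text, "keep")
```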
Update:
Some services that do this already:
Articles on filtering feeds:
Thursday, August 10, 2006
Super Artificial Intelligence
Like most programmers, I've always been interested in AI, or Artificial Intelligence. Countless books and movies have been based on machines that gain the intelligence of humans. Usually, the plot involves the machines turning against their creators.
But right now, AI is being put to other uses: machine vision, language translation, even managing restaurants.
The above tasks are accomplished quite well by humans, but we do err on occasion. To err is human; we all accept that (to some degree). But what happens when machines start to make the same mistakes? Will we accept it? I'm not sure that we will. I think we will demand machines that don't make the same mistakes we do. What we really want is Super Artificial Intelligence: machines that are superior to us.
That doesn't bode well for the day it's Humanity vs. The Machine.