Friday, November 02, 2012

Datadog and JMXTrans file output

You can use JMXTrans to poll JMX counters from your JVM and write them to a file using the KeyOutWriter.  

It writes tab-delimited lines in the format: <metric> <value> <timestamp>.
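For reference, the JMXTrans side of this is a JSON config that attaches a KeyOutWriter to each query. Here is a minimal sketch; the host, port, ObjectName and attribute are placeholders, and the exact KeyOutWriter settings keys may differ between JMXTrans versions, so check the docs for the version you run:

{
  "servers": [{
    "host": "localhost",
    "port": "1099",
    "queries": [{
      "obj": "java.lang:type=Memory",
      "attr": ["HeapMemoryUsage"],
      "outputWriters": [{
        "@class": "com.googlecode.jmxtrans.model.output.KeyOutWriter",
        "settings": {
          "outputFile": "/tmp/jmx_key_out.txt"
        }
      }]
    }]
  }]
}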

If you want to send these to Datadog for graphing and analysis, you can use the Dogstream facility to parse the JMXTrans output files. By default it expects lines like:

     <metric> <timestamp> <value>

which isn't quite what JMXTrans outputs. To get around this, create a custom parser:

parsers.py:
import calendar
from datetime import datetime

def parse_jmxtrans(logger, line):
    # KeyOutWriter lines look like: <metric>\t<value>\t<timestamp>
    metric, value, timestamp = line.rstrip("\n").split("\t")
    # keep the first 10 digits so a millisecond timestamp becomes seconds,
    # then round-trip through UTC to get a plain epoch value back
    date = datetime.utcfromtimestamp(float(timestamp[:10]))
    date = calendar.timegm(date.timetuple())
    value = float(value.strip())
    return (metric, date, value, {})


and configure it like this in datadog.conf:
       dogstreams: /tmp/jmx_key_out.txt:parsers:parse_jmxtrans

where you replace /tmp/jmx_key_out.txt with the path you configured in JMXTrans.
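To sanity-check the parser outside of the Agent, you can run it by hand against a fabricated line (the metric name, value and timestamp below are made up; run this from the directory containing parsers.py):

# quick manual test of parse_jmxtrans against a fake KeyOutWriter line
from parsers import parse_jmxtrans

line = "app.jvm.HeapMemoryUsage.used\t123456789\t1351872000000\n"
print(parse_jmxtrans(None, line))
# prints a (metric, epoch_seconds, value, attributes) tuple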



Tuesday, January 08, 2008

JBoss WebSphere MQ Failover

IBM's MQSeries Message Broker doesn't have any built-in failover support. The recommended solution is IP-level takeover, where a standby machine monitors the main machine via a heartbeat. If the heartbeat goes away, the failover machine takes over the IP address. Your clients will need to re-connect, but that should be all.


We didn't want to do that, so instead I needed our MDBs and other JMS client code running in JBoss to fail over to a different MQ server.


The solution I chose was to have an MBean that monitors the "main" server by listening on a "monitor" queue (just a normal queue created for monitoring purposes). When a failure is detected (by a JMS exception listener), it re-populates the MQ JMS objects in JNDI.


The MDBs will automatically try to re-connect when they detect a failure. Eventually, after JNDI has been updated to point at the new MQ server, they will reconnect properly.


Other client code (MDBs that send messages, servlets that send/receive) needs to know to re-connect when there is a failure. I created a notification system so that this code can re-fetch the relevant objects from JNDI after they are re-populated. In addition, any temporary queues that were created will need to be re-created, since they still refer to the failed MQ server.


Hopefully, I can write this up in more detail over the next couple of weeks.

Thursday, December 14, 2006

Predicting bugs in code changes using SCM information

This is a note to remind me that I might want to watch the whole video.

Google TechTalks
March 8, 2006

Jim Whitehead
Jim Whitehead is an Assistant Professor of Computer Science at the University of California, Santa Cruz. He has recently been developing a new degree program on computer gaming, the BS in Computer Game Engineering. Jim received his PhD in Information and Computer Science from UC Irvine in 2000.

Abstract:
Almost all software contains undiscovered bugs, ones that have not yet been exposed by testing or by users. Wouldn't it be nice if there was a way to know the location of these bugs? This talk presents two approaches for predicting the location of bugs. The bug cache contains 10% of the files in a software project.

Wednesday, October 18, 2006

Filtering Blog posts

I use Bloglines to read weblogs. I recently added a site that posts different deals on the web every day because I was going to buy a new monitor. Anyway, I still like to see what is going on, but there are a lot of deals I'm not interested in. I'd really like to be able to filter the results.

Bloglines doesn't have any filtering capability, but I could create a proxy server to do the filtering for me. So, for example: instead of giving Bloglines http://feed.com/atom.xml, I would give it http://myproxy.com/blogfilter/feed.com/atom.xml. My 'blogfilter' application would make the request to feed.com, get the data, filter stuff out and return the results.

How should I filter? The simplest approach would be by keywords: remove entries with certain keywords in the title or body. You could have per-blog filters, too. A next step might be to use Bayesian filtering, but then you'd need an easy way to "train" the system. The proxy could add a small snippet of JavaScript at the end of each post. The JavaScript would be loaded from my site and would create a little panel with options to somehow rate the post. It would then use this data to train the Bayesian filter.
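The keyword part, at least, is easy. Here is a rough sketch of what the proxy's filtering step might look like (the entry structure and the keyword list are made up for illustration; the real proxy would still have to fetch the upstream feed and re-serialize whatever survives):

# drop feed entries whose title or body mentions a blocked keyword
BLOCKED_KEYWORDS = ["laptop", "tv", "refurbished"]

def keep_entry(entry):
    text = (entry.get("title", "") + " " + entry.get("body", "")).lower()
    return not any(word in text for word in BLOCKED_KEYWORDS)

def filter_entries(entries):
    return [entry for entry in entries if keep_entry(entry)]

# filter_entries([{"title": "Refurbished laptop deal", "body": "..."},
#                 {"title": "Free shipping on books", "body": "..."}])
# keeps only the second entry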

Update: There are already some services that do this, as well as some articles on filtering feeds.



Thursday, August 10, 2006

Super Artificial Intelligence

Like most programmers, I've always been interested in AI, or Artificial Intelligence. Countless books and movies have been based on machines that gain the intelligence of humans. Usually, the plot involves the machines turning against their creators.

But right now, AI is being put to other uses: machine vision, language translation, even managing restaurants.

The above tasks are accomplished quite well by humans, but we do err on occasion. To err is human; we all accept that (to some degree). But what happens when machines start to make the same mistakes? Will we accept it? I'm not sure that we will. I think we will demand machines that don't make the same mistakes we do. What we really want is Super Artificial Intelligence: machines that are superior to us.

That doesn't bode well for the day it's Humanity vs. The Machine.