My Ninja, Please! 1.31 : Machine Learning

Imagine this problem: you are blindfolded and seated next to a person typing on a standard keyboard. You listen to them type for 10 minutes. Click-clack, tip-tap. Your task is to learn to recognize what they are typing purely from the sound of fingertips striking keys. Sounds pretty tricky, but researchers at Berkeley have developed an AI system that can do just that.

How does it work? This is where things get even more impressive. The input to the system is a 10-minute sound recording of a person typing. The algorithm uses this recording to build a model of the sounds that individual keys make when struck. Amazingly, there is almost no knowledge built into the system from the start. The initial model does not know how many keys there are or where the boundaries between keystrokes fall.
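To make that setup concrete, here is a minimal sketch in Python of what an unsupervised pipeline along these lines could look like: detect keystrokes from the recording's energy envelope, describe each one by a short spectral fingerprint, and cluster the fingerprints without any labels. The feature choice, window sizes, and cluster count are illustrative assumptions, not the Berkeley system's actual parameters.

```python
# Sketch: unsupervised keystroke detection and clustering from a raw recording.
import numpy as np
from sklearn.cluster import KMeans

def segment_keystrokes(audio, rate, frame_ms=10, threshold_factor=4.0):
    """Find keystroke onsets as frames whose energy spikes above the noise floor."""
    frame_len = int(rate * frame_ms / 1000)
    n_frames = len(audio) // frame_len
    frames = audio[:n_frames * frame_len].reshape(n_frames, frame_len)
    energy = (frames ** 2).sum(axis=1)
    threshold = threshold_factor * np.median(energy)
    onsets = []
    armed = True
    for i, e in enumerate(energy):
        if armed and e > threshold:
            onsets.append(i * frame_len)   # start of a keystroke
            armed = False
        elif e < threshold:
            armed = True                   # wait for the sound to die down
    return onsets

def keystroke_features(audio, rate, onsets, dur_ms=100, n_bins=64):
    """Describe each keystroke by the magnitude spectrum of a short window after its onset."""
    dur = int(rate * dur_ms / 1000)
    feats = []
    for start in onsets:
        clip = audio[start:start + dur]
        if len(clip) < dur:
            continue
        spectrum = np.abs(np.fft.rfft(clip))[:n_bins]
        feats.append(spectrum / (np.linalg.norm(spectrum) + 1e-9))
    return np.array(feats)

def cluster_keystrokes(features, n_clusters=30):
    """Group similar-sounding keystrokes; anonymous cluster IDs stand in for unknown keys."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    return km.fit_predict(features)
```

The output of a pipeline like this is just a sequence of anonymous cluster IDs, one per keystroke, which is roughly the position a blindfolded human listener would be in.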

Most machine learning systems begin with data that has been labeled by people, an approach called "supervised learning". The Berkeley system doesn't need an annotated dataset; it doesn't even need to know the alphabet being used. To imagine the equivalent task performed by a human, picture something like a blindfolded Russian speaker listening to someone type English, and even that doesn't go far enough, since Russian and English share semantic and syntactic linguistic universals that an AI system isn't privy to.
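How can anonymous clusters ever be turned into text without labels? One deliberately crude way to illustrate the idea is to let the statistics of English do the work: rank the clusters by how often they occur and pair them with letters ranked by English letter frequency. The real system relies on much stronger statistical language modelling and iterative retraining; the approximate frequency table and one-to-one matching below are simplifying assumptions for the sketch.

```python
# Sketch: assign letters to anonymous keystroke clusters using only language statistics.
from collections import Counter

# Approximate English letter frequency order, most common first (an assumption).
ENGLISH_FREQ_ORDER = "etaoinshrdlcumwfgypbvkjxqz"

def guess_mapping(cluster_ids):
    """Map each cluster ID to a letter by matching its usage rank to English letter frequency."""
    ranked_clusters = [c for c, _ in Counter(cluster_ids).most_common()]
    return {c: ENGLISH_FREQ_ORDER[i] if i < len(ENGLISH_FREQ_ORDER) else "?"
            for i, c in enumerate(ranked_clusters)}

def decode(cluster_ids):
    """Turn a sequence of cluster IDs into a guessed character string."""
    mapping = guess_mapping(cluster_ids)
    return "".join(mapping[c] for c in cluster_ids)
```

Crude as it is, this captures the key point: regularities in the language being typed, not human annotations, supply the supervision.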

Posted: January 31st, 2011 at 8:51pm by Koookiecrumbles


Categories: myninjaplease,computers
