Graphical representation of a sample K Nearest Neighbour classifier

Today I read the first lines of a promising tutorial about machine learning. It starts by introducing the subject and showing a first JavaScript example of a classifier. I’ve never read anything about this before, so my knowledge is still very limited (yes, I stopped before the end of the first lesson because I wanted to implement this). For technical details, I refer you to the tutorial.

Anyway, what I’ve understood so far is that an important part of machine learning consists of classifiers. Classifiers are algorithms meant for “recognizing” (classifying) objects by some of their features. You feed them with a known dataset (already classified) and hope they will be able to classify any new object you throw at them, by recognizing its features and comparing them with the known features in the dataset. I won’t go technical on this (I can’t yet); I suggest you read the linked tutorial if you’re really interested.

There are plenty of classifiers, but one of the simplest is the K Nearest Neighbour classifier. It works by representing the n features (which must be numeric) on the axes of an n-dimensional space. When you give it an element to classify, it finds the K nearest known elements (from the given dataset) and checks which class occurs most often among these K elements. The element is then assumed to belong to this class, and is thus classified.
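
My app is in JavaScript, but the algorithm fits in a few lines of any language. Here’s a minimal Python sketch of the idea (my own illustration, not the tutorial’s code), where each known sample is a pair of numeric features plus a class label:

from collections import Counter
import math

def knn_classify(dataset, point, k=5):
    # sort the known samples by Euclidean distance from the new point
    neighbours = sorted(dataset, key=lambda s: math.dist(s[0], point))[:k]
    # the most frequent class among the k nearest neighbours wins
    return Counter(label for _, label in neighbours).most_common(1)[0][0]

data = [((1, 1), 'red'), ((1, 2), 'red'), ((8, 8), 'blue'), ((9, 8), 'blue')]
print(knn_classify(data, (2, 1)))  # -> 'red'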

I wanted to try to implement my own version of this simple algorithm, so I wrote this little JavaScript app which takes some input points (x and y are the numeric features), each of them with a color (the known class), and then generates new random elements (x and y) and classifies them with the K Nearest Neighbour algorithm (with K=5 for now, but I change that often), coloring the points (putting them in a class). The result, after some time, is that the entire space is colored in a way that depends on the dataset (your original input). That’s not very useful, but it’s certainly fun. At least it has been for me.

Notice that this implementation uses, as the dataset for the current step, every element on the canvas, including those generated randomly and classified in previous steps. This is of course not very clever for this kind of classifier, but it makes the result less predictable, and it better fits my purposes (I have no purposes).

Here’s the link, and here’s a screenshot:

Why unencrypted wireless networks are bad

A lot of times, speaking to people about home wifi and security, I hear something like this: “Why should I encrypt my home wireless network? I don’t mind sharing my internet connection, I’m OK with it as long as I don’t need my whole bandwidth”. And I can’t say that’s wrong, in itself.

But what these people don’t think about, which is not obvious to those who don’t know how it works, is the whole question of security: when you connect to a wifi network, you usually exchange data over the air with an access point, which is the only device supposed to receive and process it.

But when you send data through your wireless card, you’re just broadcasting it over the air, and every device close enough to your computer which is capable of receiving wireless data could potentially receive it.

As you can imagine, when you establish a connection through an access point, you’re sending data but you’re also receiving data from it. That means every wireless card is capable of receiving the data of a wireless connection. Obvious.

But then, how come I can surf the Internet without seeing the traffic of all the other people on the network? Of course, the wireless protocol ensures that only the data meant for my wireless card will be processed, ignoring all the packets sent to others’. That’s clear and reasonable.

But what you should ask now is: who guarantees that the wireless protocol actually works that way?

Nice question. It’s the operating system. It speaks directly to the hardware, which is meant to receive bits correctly and not much more. The card then hands the received bytes to the operating system, which interprets them and decides what to do. Operating systems are usually built to keep people from messing with the hardware itself, which is usually a good idea, so the protocols are deeply integrated with them. Here is where you come in and say: “The hardware is mine and I want to do what I want with it”. That’s why you should use Linux. Linux is free, and so are you when you run it on your computer hardware. Linux is programmed to work as you’d expect, but it always lets you do what you want if you know how (and have the right permissions).

That means yes, you can actually receive the data packets other people are broadcasting, as long as you’re close enough to the source of the wireless signal and you have a wireless card capable of doing that (most cards will work, but some cards block this capability in hardware).

Now that you understand that you can receive other people’s data, let’s go back to the encryption problem. Of course, since you’re broadcasting data over the air, it can always be picked up by someone else’s wireless card. But if you’re connected to an unencrypted network, you’re also sending data in the clear. That means you’re broadcasting everything you send to the network to potentially anyone, and they can read it in the clear. The funny thing is that, when you’re only receiving data, no one can notice, since you’re not transmitting anything yourself. You don’t even need to be connected to the same network; you just have to listen on the right “channel”.

Fortunately, you’re not beaten yet in this privacy war. If you’re connected to an unencrypted wireless network but you’re using an encrypted service, such as https, the wireless link still adds no protection of its own, but the data you send has already been encrypted by the https protocol, which you and the endpoint are using (and want to use). So when the malicious listener receives the wireless data, he can see it, but he’ll find it’s encrypted.

You can now understand that all the unencrypted traffic sent through an unencrypted connection can be intercepted and read in the clear by a potential attacker.

Some examples of unencrypted services that transmit unencrypted data are ftp, pop3, smtp and http. If you use one of these protocols over an unencrypted connection, your traffic can easily be read by someone else’s computer nearby.

I’ve created a little bash script which looks for an unencrypted wireless network and sets up the wireless card to listen for the packets sent through it. You can then use your preferred packet sniffer to display and analyze the packets received from your wireless card.

Here’s the code:

#!/bin/bash

dev=wlan0

echo "Setting $dev to managed mode"
sudo rfkill unblock wifi
sudo ifconfig $dev down; sudo iwconfig $dev mode managed
sudo ifconfig $dev up

channel=""
ssid=""
unencryptedchannel=""
sleep 2
echo "Searching for unsecured network channels"
# iwlist output is scanned word by word: "Cell" starts a new network,
# then its channel, encryption status and ESSID show up in that order
for word in `sudo iwlist $dev scan`; do
    if [ "$word" == "Cell" ]; then
        channel=""
        ssid=""
    fi

    buf=`echo "$word" | grep "Channel:" | cut -d':' -f 2`
    #echo "Buf: $buf"
    if [ "$buf" != "" ]; then
        echo "I've got a channel! The channel $buf"
        channel="$buf"
    fi

    essid=`echo "$word" | grep "ESSID:" | cut -d':' -f 2`
    if [ "$essid" != "" ]; then
        echo "The essid is $essid"
        ssid="$essid"
        # the ESSID comes after the encryption field, so if we already
        # found an unencrypted channel this is its network name: done
        if [ "$unencryptedchannel" != "" ]; then
            break
        fi
    fi

    enc=`echo "$word" | grep "key:" | cut -d':' -f 2`
    #echo "enc: $enc"
    if [ "$enc" == "off" ]; then
        echo "The channel $channel has no encryption!"
        unencryptedchannel="$channel"
    fi
done

if [ "$unencryptedchannel" == "" ]; then
    echo "No unencrypted network found. Quitting"
    exit
fi

echo "Your channel is $unencryptedchannel, on wifi network $ssid. Proceeding with sniffing"
sleep 1

echo "Putting $dev in monitor mode"
sudo ifconfig $dev down; sudo iwconfig $dev mode monitor
sudo ifconfig $dev up
sleep 1
while [ "`iwconfig $dev | grep Monitor`" == "" ]; do
    echo "Monitor mode not set, retrying"
    sudo ifconfig $dev down; sudo iwconfig $dev mode monitor
    sudo ifconfig $dev up
    sleep 1
done

echo "Setting $dev to channel $unencryptedchannel"
sudo iwconfig $dev channel $unencryptedchannel
echo "Interface $dev ready for sniffing."

This code is not intended for malicious purposes; it’s just a proof of concept to understand what the real risks of transmitting through an unencrypted network are. Use it to try to intercept your own traffic while sending emails with smtp, retrieving them with pop3, or connecting to your ftp host. It will help you better understand what I explained in this post.
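
For example, once the script above has put the interface in monitor mode, a few lines of Python with the scapy library (my choice here, not something the post depends on; any sniffer like Wireshark or tcpdump works just as well) can print a summary of every frame the card picks up. Run it as root:

from scapy.all import sniff

# print a one-line summary of every frame seen on the monitor-mode interface;
# store=False keeps scapy from holding every captured packet in memory
sniff(iface="wlan0", prn=lambda pkt: pkt.summary(), store=False)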

It’s even possible to intercept the images you are viewing in your browser over http, yes, like Facebook photos and similar. That’s because Facebook, by default, avoids using https after login due to its bandwidth cost.

I’ll conclude by encouraging you to encrypt your wireless connection if you want to protect your privacy, or at least to be aware of the risks you take by using it with unencrypted services.

Installing EPSON Stylus SX235W on Linux Ubuntu/Mint

Since I’ve had some difficulty finding support on the internet for this printer model, I decided to write this short post about it.

I had trouble installing this printer on Linux Mint 14, but I guess it’s the same on various Debian-based distributions.

To get to the solution, I followed some instructions from this post.

The first thing to do is to download the right package from the EPSON website, here, searching for the model sx235 (without the ‘W’).

Then search for the package for the ‘Linux’ operating system with the name ‘ESC/P Driver (full feature)’.

Download it and install it.

Installing this package (epson-inkjet-printer-201108w_1.0.0-1lsb3.2_i386.deb), I got a dependency error about lsb >= 3.2, even though I had version 4.0 installed.

I then forced the installation with

sudo dpkg --force-depends -i epson-inkjet-printer-201108w_1.0.0-1lsb3.2_i386.deb

Then go to the system “Add printer” dialog and select Network Printer while connected to the same network as the printer, and you should see the printer show up. Just select it and the system will do the rest: it should download the drivers and set up the printer, which should then work fine.

The package manager will now complain about the missing dependencies, but you can resolve the error with

sudo apt-get -f install

which should just remove the previously installed epson package. 

Even though it’s now removed, and the package manager is working again, the printer should still work fine, and you can still print over the network, so everything is OK 🙂

This worked for me, I hope it helps.

SecretChat

In the last few months, I attended a course on Computability and Complexity, which ended with some concepts of cryptography.

It’s been really interesting, so I wanted to experiment with it a little.

I created a little web page using AngularJS and just a little PHP, to implement a kind of “secure chat”, which uses a given passphrase to encrypt every message directly on the client, and then stores the messages in an online database. This chat can host multiple users with the same passphrase, who will be able to see each other’s messages, with all the encryption/decryption done on the client side.

This means that all the traffic going over the internet has already been encrypted, so everything stored in the database must be decrypted with the key to be read. In theory, even access to the database wouldn’t give any information about the messages sent through the chat.

I don’t know exactly how commercial messaging systems work, but I’ve always suspected that if the provider of the service wanted to, it would be able to read the messages going through its servers, because the ‘secure’ connection (if any) is established between the server and the clients, and not between the two endpoints of the communication.

In the system I implemented, instead, the server only does the ‘buffering’ of the data; it never gets in touch with the encryption keys, so it can’t read the messages it saves to the database.

I don’t really believe commercial systems work exactly that way, but the doubt was enough to get me to develop this little project 🙂

The software uses the AES encryption standard, implemented in JavaScript with Crypto-JS.

AES is a symmetric-key cipher, which means that the users share a secret key and use it to encrypt the data sent through an insecure channel.
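
The chat does all of this in the browser with Crypto-JS, but the concept is easy to show in Python too. This is only an illustration of mine (using the cryptography package, whose Fernet recipe is built on AES; the fixed salt is a deliberate simplification): derive a key from the shared passphrase, encrypt on one side, decrypt on the other.

import base64
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

SALT = b'demo-salt'  # illustrative only: a real system manages salts carefully

def key_from_passphrase(passphrase):
    # stretch the passphrase into a 32-byte key
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=SALT, iterations=100000)
    return base64.urlsafe_b64encode(kdf.derive(passphrase.encode()))

f = Fernet(key_from_passphrase('our shared secret'))
token = f.encrypt(b'hello')   # this ciphertext is what would reach the database
print(f.decrypt(token))       # only holders of the passphrase can read it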

Here’s the link to the chat. If you want to be sure I’m not reading your messages in the database, you should inspect the JavaScript of the page and verify that the encryption is done with your passphrase, on the client side of the application. This should be enough to convince you that I can’t read anything without the key.

Here’s the (still in development) link.

Genetic Programming (2) – Santa Claus

My experiments with genetic programming are going on. I’m trying to focus on something more useful, so I’ve made a little page which uses GP to search for the shortest path connecting a number of points, randomly generated or specified by the user.

It is possible to modify the parameters of the genetic algorithm by editing the JSON object in the textarea.
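
To give an idea of what happens under the hood, here is a toy Python version of the approach (my own sketch, not the page’s code: it only uses selection and mutation, while a fuller GA would add crossover). Individuals are orderings of the points, and the fitness is the total path length:

import math, random

def path_length(points, order):
    # total Euclidean length of the path visiting the points in this order
    return sum(math.dist(points[order[i]], points[order[i + 1]])
               for i in range(len(order) - 1))

def evolve(points, pop_size=50, generations=200, mutation_rate=0.2):
    n = len(points)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda o: path_length(points, o))
        survivors = pop[:pop_size // 2]           # selection: keep the shortest half
        children = []
        for parent in survivors:
            child = parent[:]
            if random.random() < mutation_rate:   # mutation: swap two points
                i, j = random.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda o: path_length(points, o))

points = [(random.random(), random.random()) for _ in range(10)]
print(evolve(points))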

The page is here.

Genetic Programming – Just for fun

I started reading about genetic programming, and I must say it’s really interesting. I decided to try to implement a genetic algorithm to dig deeper and better understand how it works, so I developed a little GP evolving environment.

Everything on that page is really messed up, and it doesn’t completely work for now; anyway, if you want to give it a try, it’s here.

Building a Raspberry PI Car Radio

Today I’m going to write about my new project, a Raspberry PI powered car radio. For now it provides only basic functionality, but it can easily be improved by upgrading the software, on which I haven’t focused much so far.

In this post I will explain how I configured the PI to accomplish the task.

What I used in this project:

  • a Raspberry PI
  • a standard 16×2 character LCD display (HD44780 compatible)
  • USB speakers
  • a button
  • a wheel from an old mouse
  • a box

The idea

The idea is to make a car radio capable of playing music from a USB pen drive plugged into the PI. My radio can only shuffle the songs it finds, but that’s purely a software matter; it can be improved in many ways, and that is where a lot of customization may take place.

Getting the display working with the PI

The first thing to do is to understand how to drive the LCD display from the PI. This is done by connecting the display to the PI’s GPIO and then sending commands from the programming language, usually through a library for the HD44780 standard.

I did this from Python, but I won’t cover that part in this post, because I followed a nice tutorial about it from Adafruit.

Once I was able to write to the display, I wrote a little Python script to print something to it from the command line, which would be useful later for writing to the LCD from a shell script:


#!/usr/bin/python
# print.py: write the first command line argument to the LCD
from Adafruit_CharLCD import Adafruit_CharLCD
import sys

lcd = Adafruit_CharLCD()
lcd.begin(16, 1)
lcd.clear()
lcd.message(sys.argv[1])

Playing music through USB speakers

To play music through a USB device, the PI must be configured properly. In my case, I had some work to do before everything functioned, and my speakers still show some problems when playing from the PI (I hear a little noise, but I haven’t investigated enough to say whether it can be resolved).

Anyway, if you want to use your PI as a car radio, you probably want the USB speakers to be its default audio device. You can do this by editing /etc/modprobe.d/alsa-base.conf, adding the following line, or modifying the corresponding line if it already exists with another value (which in Raspbian should be the case):

options snd-usb-audio index=0

I also had another problem, which seems to be quite common, and had to add another line at the end of that file to avoid an “expected delay” error I discovered by running dmesg:

options snd-usb-audio nrpacks=3

I chose the highest value that didn’t give me errors in the log. After modifying the file, I restarted alsa with

sudo service alsa-utils restart

This wasn’t enough to play music through the speakers; I also had to specify the right device when playing music with mplayer. I discovered the name of the device by running the following command:

aplay -L

which gives a list of the recognized devices, from which I selected the right one, and finally got mplayer to play some mp3s:

mplayer -ao alsa:device=front my_music_file.mp3

USB automount and autoplay

Now that the display is working correctly and we can play music through the speakers, let’s figure out how to handle the mount/unmount events.

The next step is to configure the PI to automatically mount any USB drive plugged in. This can be done with the usbmount software, available in the Raspbian repository:

sudo apt-get install usbmount

Now, since I want music to play automatically when the USB drive is plugged in, I added a custom script to the default usbmount script folder /etc/usbmount/mount.d:


#!/bin/bash
python /home/pi/lcd/char/print.py "Madre foca:
Playing music.."
sudo -u pi mplayer -slave -input file=/home/pi/fifofile -ao alsa:device=front -shuffle $UM_MOUNTPOINT/* $UM_MOUNTPOINT/*/* $UM_MOUNTPOINT/*/*/*

The first command uses the Python script I created before to print something to the LCD when the music is starting, while the last line starts the playback.

As you can guess, the $UM_MOUNTPOINT variable contains the path to the mounted drive.

The -slave mplayer option is a very useful way to send commands to mplayer even while it’s backgrounded. It tells mplayer to read input commands from a file; if we provide a fifo file instead of a normal one, mplayer will read from it only when we write something. This allows mplayer to be controlled by writing to a file, from Python, shell or whatever you prefer. To create a fifo file you should use

mkfifo /home/pi/fifofile

If you want to know more about the -slave option of mplayer, see here.
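
For example, once mplayer is running in slave mode, any process can control it just by writing a command to the fifo. A minimal Python sketch (slave-mode commands such as pause and seek are listed in the documentation linked above):

# toggle pause/play on the running mplayer instance
with open('/home/pi/fifofile', 'w') as fifo:
    fifo.write('pause\n')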

I had to add the sudo -u pi part because it seemed that the usbmount user wasn’t allowed to read from the fifo; I don’t know exactly why.

Finally, I also created a script for the umount event, which simply kills mplayer, and placed it in the /etc/usbmount/umount.d directory:


#!/bin/bash
python /home/pi/lcd/char/print.py "Music stopped.
I'm a super PI."
killall mplayer

Connecting the inputs: the button and the wheel

I’m not very good with electronics, so I won’t explain this part very deeply, also because it’s pretty basic.

The scheme to read the state of a button should be something like this:

The mouse wheel should have three connectors, and it behaves like two buttons together, with the central pin in common. The only thing to know about it is that the two “buttons” switch on and off as you turn the wheel, but one of them is triggered a little before the other, which tells you the direction.
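
In other words, the wheel acts as a simple quadrature encoder. Just to sketch the decoding idea in Python (the full radio script later in this post does the same thing inline; which value means left and which means right depends on your wiring):

def wheel_direction(a, b, last_a):
    # while the wheel moves between two notches the contacts disagree;
    # the contact that changed first reveals the turning direction
    if a != b:
        return 'left' if a != last_a else 'right'
    return None  # contacts agree: the wheel is at rest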

I decided to use the wheel (i.e. two buttons) plus an additional button to provide the basic input for a “menu” structured OS for the radio, which is the minimum input for fair usability.

Reacting to the inputs

The last thing to do is to react to the user’s input, writing the real “operating system” of our radio. I wrote this in Python, and the interface is really minimal: the wheel controls the volume and the button skips to the next song.

As usual the code is badly written, and it should be used only to understand how things work. I encourage you to write a real interface; you could really do anything with this.

The script (badly written, of course) is this:

import RPi.GPIO as GPIO
import time
import os
from Adafruit_CharLCD import Adafruit_CharLCD

lcd = Adafruit_CharLCD()
lcd.begin(16,1)
lcd.clear()
lcd.message("Hello!\nI'm the SuperPI.")

def left():
    lcd.clear()
    lcd.message("Volume up")
    os.system("amixer set PCM 5+")
    print "left"

def right():
    lcd.clear()
    lcd.message("Volume down")
    os.system("amixer set PCM 5-")
    print "right"

def pressed():
    print "pressed"
    lcd.clear()
    lcd.message("Skipping to\nnext song")
    # tell mplayer (through the fifo) to seek to 100% of the current
    # song, so playback jumps to the next one in the shuffled playlist
    f = open('/home/pi/fifofile','w')
    f.write('seek 100 1\n')
    f.close()

GPIO.setmode(GPIO.BCM)
GPIO.setup(4, GPIO.IN)   # wheel contact A
GPIO.setup(18, GPIO.IN)  # wheel contact B
GPIO.setup(2, GPIO.IN)   # push button (low when pressed)

last_a = False
last_b = False
last_c = True
paired = True  # True while the two wheel contacts are in the same state
i=0
while True:
    a = GPIO.input(4)
    b = GPIO.input(18)
    c = GPIO.input(2)
    #print "4: "+str(a)+ " 18: " + str(b)
    # falling edge on the button pin: skip to the next song
    if c == False and c != last_c:
        pressed()

    # the two wheel contacts go out of step when the wheel turns;
    # the one that changed first tells us the direction
    if a != b and paired:
        if a != last_a:
            left()
        else:
            right()
        paired = False
    else:
        paired = True
    i+=1
    last_a = a
    last_b = b
    last_c = c
    time.sleep(0.01)

Note that it uses the previously created fifo to send mplayer the command to skip to the next song.

Assembling

I’m sure that anyone can assemble this project better than I did. And to prove it, I’ll show a picture:

The video

Here’s a little video (in Italian) in which I show the (few) functions of the radio:

When I have some spare time, I’m thinking of extending its functions by attaching a webcam and trying to do something more complex, given the wide range of possibilities the PI can offer.

Raspberry PI as FM AUX input for your car radio

When I read about turning your Raspberry PI into an FM transmitter, I was really excited. That’s real hacking!
I decided to try to use this hack to provide an aux input for a car radio which doesn’t have one, and I succeeded (quite well).

I downloaded the FM transmitter C program from the link above and figured out a way to feed my USB audio card’s mic input to the program, to be able to broadcast it in FM: wonderful.

The result was a short-range Raspberry PI powered FM transmitter: I was able to tune my radio to the right frequency (in this case 100.00 MHz) and listen to the music I was playing from my device.

Since there has been some interest in how I made this, I’m posting the little scripts I wrote for this project, even if they are not written very well and may not be easy to understand and adapt to your needs. Here they are:

https://drive.google.com/folderview?id=0B8r7_UiGnBnUUW9zMXVpMjB1akU&usp=sharing

Raspberry PI powered Face Follower Webcam

Hi everybody, here’s my new Raspberry PI project: the face follower webcam!

When I received my first Raspberry, I understood it would be a lot of fun to play with real-world things: so I tried to make the PI react to the environment, and I played a lot with speech recognition, various kinds of sensors and so on. But then I suddenly realized that the really fun thing would be to make it see. I also understood that processing a webcam input in real time would be a tough task for a device with such tiny resources.

I’ve now discovered that this is indeed a tough task, but the Raspberry can achieve it, as long as we keep things simple.

OpenCV Libraries

As soon as I started playing with webcams, I decided to look for existing libraries to parse the webcam input and extract some information from it. I soon came across the OpenCV libraries, beautiful libraries for processing images from the webcam with a lot of features (which I haven’t fully explored yet), such as face detection.

I started trying to understand whether it was possible to make them work on the PI, and yes, someone had already done that, with Python too, and it was as easy as a

sudo apt-get install python-opencv

After some tries, I found out that the PI was slow in processing the frames to detect faces, but only because it buffered the frames, and I soon had a workaround for that. Then I was done: face detection on the Raspberry PI!
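
The workaround, which you will find again in the full script at the end of the post, is simply to grab and discard a few frames before the one actually processed, so that the stale buffered frames are flushed:

# throw away the buffered (old) frames...
for i in range(5):
    cv.QueryFrame(capture)
img = cv.QueryFrame(capture)  # ...so this one is (almost) current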

All you need to try face detection on your own is the python-opencv package and the pattern XML file haarcascade_frontalface_alt.xml; I guess you can find it easily on the Internet.

I started from a script found here and then modified it for my purposes.

The Project

Eventually, I decided to build with my PI a motor-driven webcam which could “follow” a face detected by OpenCV in the webcam stream. I disassembled a lot of things before finding a suitable motor, but then I managed to connect it to the Raspberry GPIO. I had to play a little with electronics here, because I didn’t have a servo motor (if you need some information about this part, I’ll be happy to provide it, but I’m not so good with electronics, so the only thing I can say is that it works). Here’s a picture of the circuit:
The circuit to control the motor from the PI.

And here are some photos of the motor, to which I mounted an additional gear wheel.


Once the motor worked, I attached it to a webcam which I plugged into the PI, and then combined the previously linked script with some GPIO scripting to achieve the goal. Here is the result:

import RPi.GPIO as GPIO
import time,sys
import cv,os
from datetime import datetime
import Image

#GPIO pins i used
OUT0 = 11
OUT1 = 13

out0 = False  # enable line: when False the motor turns, when True it stops
out1 = False  # direction line: clockwise or counter-clockwise (depends on the circuit you made)

def DetectFace(image, faceCascade):

    min_size = (20,20)
    image_scale = 2
    haar_scale = 1.1
    min_neighbors = 3
    haar_flags = 0

    # Allocate the temporary images
    grayscale = cv.CreateImage((image.width, image.height), 8, 1)
    smallImage = cv.CreateImage(
            (
                cv.Round(image.width / image_scale),
                cv.Round(image.height / image_scale)
            ), 8 ,1)

    # Convert color input image to grayscale
    cv.CvtColor(image, grayscale, cv.CV_BGR2GRAY)

    # Scale input image for faster processing
    cv.Resize(grayscale, smallImage, cv.CV_INTER_LINEAR)

    # Equalize the histogram
    cv.EqualizeHist(smallImage, smallImage)

    # Detect the faces
    faces = cv.HaarDetectObjects(
            smallImage, faceCascade, cv.CreateMemStorage(0),
            haar_scale, min_neighbors, haar_flags, min_size
        )

    # If faces are found
    if faces:
        #os.system("espeak -v it salve")
        for ((x, y, w, h), n) in faces:
            # return the face centre, normalized to [0,1] on both axes
            # (the image was scaled down, hence the image_scale factor)
            return (image_scale*(x+w/2.0)/image.width, image_scale*(y+h/2.0)/image.height)
            # the bounding-box drawing below comes from the original example;
            # it is never reached because of the return above
            pt1 = (int(x * image_scale), int(y * image_scale))
            pt2 = (int((x + w) * image_scale), int((y + h) * image_scale))
            cv.Rectangle(image, pt1, pt2, cv.RGB(255, 0, 0), 5, 8, 0)

    return False

def now():
    return str(datetime.now());

def init():
    GPIO.setmode(GPIO.BOARD)
    GPIO.setup(OUT0, GPIO.OUT)
    GPIO.setup(OUT1, GPIO.OUT)
    stop()

def stop(): # stops the motor and returns once it has stopped
    global out0
    if out0 == False: #sleep only if it was moving
        out0 = True
        GPIO.output(OUT0,out0)
        time.sleep(1)
    else:        
        out0 = True
        GPIO.output(OUT0,out0)

def go(side, t): #turns the motor towards side, for t seconds
    print "Turning: side: " +str(side) + " time: " +str(t)
    global out0
    global out1

    out1 = side
    GPIO.output(OUT1,out1)
    out0 = False
    GPIO.output(OUT0,out0)
    time.sleep(t)
    stop()

#getting camera number arg
cam = 0
if len(sys.argv) == 2:
    cam = sys.argv[1]

#init motor
init()

capture = cv.CaptureFromCAM(int(cam))
#I had to take the resolution down to 480x320 because the PI gave me errors with the default (higher) resolution of the webcam
cv.SetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_WIDTH, 480)
cv.SetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_HEIGHT, 320)
#capture = cv.CaptureFromFile("test.avi")

#faceCascade = cv.Load("haarcascades/haarcascade_frontalface_default.xml")
#faceCascade = cv.Load("haarcascades/haarcascade_frontalface_alt2.xml")
faceCascade = cv.Load("/usr/share/opencv/haarcascades/haarcascade_frontalface_alt.xml")
#faceCascade = cv.Load("haarcascades/haarcascade_frontalface_alt_tree.xml")

while (cv.WaitKey(500)==-1):
    print now() + ": Capturing image.."
    for i in range(5):
        cv.QueryFrame(capture) #this is the workaround to avoid frame buffering

    img = cv.QueryFrame(capture)
    hasFaces = DetectFace(img, faceCascade)
    if hasFaces == False:
        print now() + ": Face not detected."
    else:
        print now() + ": " + str(hasFaces)
        # turn time proportional to how far the face is from the image centre
        val = abs(0.5-hasFaces[0])/0.5 * 0.3
        #print "moving for " + str(val) + " secs"
        go(hasFaces[0] < 0.5, val)  # face on the left half turns one way, right half the other
    #cv.ShowImage("face detection test", image)

Of course I had to play with the timing to make the webcam turn well; the timings, like everything strictly tied to the motor, depend on your specific motor and/or circuit.

Here’s the video (in Italian):