Working with Twitter Streams with Python

So here’s how to work with Twitter streams using Python (on Windows). Note that I’ll be working with Python 2.7.8 here:

  1. Install Python: I’d already done this, so I can’t describe it step-by-step. But as far as I remember, it was pretty straightforward: download the Python executable from the Download section of this webpage, and install it like any other application.

  2. Install pip: pip is used to install Python packages (a normally tedious process) extremely easily. Here’s how to install it:

    1. Download pip from here. If you’re using Chrome, right-click the “get-pip.py” link and select “Save Link As”, as opposed to just clicking on it. Ensure it is saved as a .py file.
    2. Just opening the downloaded file will cause a Python script to run, and pip will be installed (along with setuptools, if setuptools has not already been installed).
    3. Most people don’t seem to need this, but I had to add the folder containing Python scripts to my Path variable. To do this in Windows 8.1, go to the Start menu, type in “System Environment”, click on “Environment Variables”, edit the “Path” field under System variables and append “;C:\Python27\Scripts” (or “;whatever\your\python\path\is\Scripts”).
    4. Now to test this, type “pip” in the command prompt terminal (Windows+R > cmd).

  3. Install tweepy: in the command prompt terminal, type “pip install tweepy”
  4. Set up a Twitter account to access the live 1% stream: To do this, follow the steps below (taken from the first assignment of the first offering of the Coursera course “Introduction to Data Science”):

    • Create a twitter account if you do not already have one.
    • Go to https://dev.twitter.com/apps and log in with your twitter credentials.
    • Click “create an application”
    • Fill out the form and agree to the terms. Put in a dummy website if you don’t have one you want to use.
    • On the next page, scroll down and click “Create my access token”
  5. Test things out!: Follow a few users, and wait until a few tweets are visible on your home timeline. Then run the “Hello Tweepy” program (shown below), and if you see your home timeline’s tweets, you’re all set! (Be sure to replace the consumer_key, consumer_secret, access_token and access_token_secret fields with their appropriate values first.)
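
For reference, here’s roughly what that looks like in code. The first half is the “Hello Tweepy” home-timeline example (adapted from the tweepy documentation), and the second half is a minimal way of tapping the live 1% sample stream; the key/token strings are placeholders, and the exact class names may differ a little depending on your tweepy version:

    import tweepy

    # Fill these in from your application's page on dev.twitter.com
    consumer_key = "CONSUMER_KEY"
    consumer_secret = "CONSUMER_SECRET"
    access_token = "ACCESS_TOKEN"
    access_token_secret = "ACCESS_TOKEN_SECRET"

    auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
    auth.set_access_token(access_token, access_token_secret)
    api = tweepy.API(auth)

    # "Hello Tweepy": print the tweets on your home timeline
    for tweet in api.home_timeline():
        print(tweet.text)

    # A minimal listener for the live 1% sample stream
    class PrintListener(tweepy.StreamListener):
        def on_status(self, status):
            print(status.text)

        def on_error(self, status_code):
            print("Error: %s" % status_code)
            return False  # disconnect on error

    stream = tweepy.Stream(auth, PrintListener())
    stream.sample()  # blocks, printing tweets from the 1% stream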

Improving the Setwise Stream Classification Problem

Srishti and I worked on the Setwise Stream Classification problem. Here are a few quick details.

A few of the drawbacks that we found, and the improvement strategies that we proposed to address these drawbacks, were as follows:

  1. Selection of the min_stat parameter is not discussed

     The original algorithm uses min_stat to decide when to classify an entity: an entity is classified only once more than min_stat points have been received for it. However, the paper doesn’t mention anything about how to obtain min_stat. In our proposed method, min_stat is chosen so that a class profile is returned as early as possible, while the likelihood that this early prediction matches the final prediction (the one made once all of the entity’s points have arrived) exceeds a certain amount (say, 80%). This is done using the entire training data: we measure, as a function of the number of points used to build an entity’s fingerprint, how often the nearest class profile still changes. In other words, we find the smallest number of points after which the predicted class profile is stable with more than the chosen level of certainty (a rough sketch of this estimation is given after this list).

  2. k-means was used for clustering the initial sample, which is affected by choice of initial seeds

    To reduce the dependency on the initially chosen seeds, we proposed using bisecting k-means.

  3. The anchors remain constant throughout

    Anchors play a crucial role in determining fingerprints, which in turn determine which class profile an entity is assigned to. However, in the original algorithm, anchors remain constant with time. In our algorithm, we propose updating the anchors incrementally with time, offline. While this would cause some overhead, it would make the system less prone to concept drift. This improvement would be of particular interest when parallelizing the algorithm, since a parallelized implementation of the anchor update would reduce the overhead significantly.

  4. The problem of concept drift has not been dealt with

    Our improved implementation proposes using a distance-based method to measure concept drift, and those entities which cross the distance threshold are classified into a separate “concept drifted” class (with a possibility of being classified in one of the classes when sufficient data becomes available, although we haven’t looked into this in detail).
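
As a very rough sketch of the min_stat estimation described in point 1 above, the snippet below uses a plain nearest-centroid stand-in for the fingerprint and class-profile machinery (the names training_entities, class_profiles, etc. are invented for the example, not taken from the original algorithm):

    import numpy as np

    def nearest_profile(points, class_profiles):
        """Nearest class profile to the mean ("fingerprint") of the given points."""
        fingerprint = np.mean(points, axis=0)
        distances = np.linalg.norm(class_profiles - fingerprint, axis=1)
        return int(np.argmin(distances))

    def estimate_min_stat(training_entities, class_profiles, certainty=0.8):
        """Smallest number of points after which the predicted profile matches
        the final prediction for at least `certainty` of the training entities."""
        final = [nearest_profile(e, class_profiles) for e in training_entities]
        max_len = max(len(e) for e in training_entities)
        for n in range(1, max_len + 1):
            agree = [nearest_profile(e[:n], class_profiles) == f
                     for e, f in zip(training_entities, final) if len(e) >= n]
            if agree and sum(agree) / float(len(agree)) >= certainty:
                return n
        return max_len

    # Example: 2D points, two class profiles
    rng = np.random.RandomState(0)
    profiles = np.array([[0.0, 0.0], [5.0, 5.0]])
    entities = [rng.normal(loc=profiles[i % 2], scale=1.0, size=(50, 2)) for i in range(20)]
    print(estimate_min_stat(entities, profiles, certainty=0.8))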

Our strategy for parallelizing the original algorithm is as follows:

  1. Parallelize the K-Means run on init_sample
  2. Parallelize the process of determining the closest anchor (required only for a large number of anchors, and/or for high-dimensional data; a rough sketch appears after this list)
  3. Parallelize updating the fingerprint of the appropriate entity (this is required if there are a very large number of anchors)
  4. Parallelize determining the class profile that is closest to the fingerprint in question (this is required only if there are a very large number of class profiles)
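
To illustrate point 2, here’s one toy way the closest-anchor search could be split across processes using Python’s multiprocessing module, chunking the anchors and reducing the per-chunk minima; the data here is random and purely for demonstration:

    import numpy as np
    from multiprocessing import Pool

    def chunk_min(args):
        """Return (distance, global_index) of the nearest anchor within one chunk."""
        point, chunk, offset = args
        d = np.linalg.norm(chunk - point, axis=1)
        i = int(np.argmin(d))
        return d[i], offset + i

    def closest_anchor_parallel(point, anchors, n_workers=4):
        chunks = np.array_split(anchors, n_workers)
        offsets = np.cumsum([0] + [len(c) for c in chunks[:-1]])
        pool = Pool(n_workers)
        results = pool.map(chunk_min, [(point, c, o) for c, o in zip(chunks, offsets)])
        pool.close()
        pool.join()
        return min(results)[1]  # index of the overall nearest anchor

    if __name__ == "__main__":
        anchors = np.random.rand(100000, 64)   # many anchors, high-dimensional
        point = np.random.rand(64)
        print(closest_anchor_parallel(point, anchors))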

And finally, here are a few quick steps on how to get OpenMPI running:

  1. Install Cygwin for Windows
  2. Select “openMPI”, “libopenmpi”, “libopenmpicxx1” from lib, “gdb”, “gcc-core”, “gcc-g++” from Devel, and “openmpi-debuginfo” from the Debug option (version 1.8.2 at the time of writing)

Building a SLAM bot with a Kinect

As our project for the semester, Kapil, Tanmay and I will be building a bot that performs Simultaneous Localization and Mapping (or SLAM, in short) under Dr. J. L. Raheja at CEERI (Central Electronics Engineering Research Institute), Pilani. Here, I’ll be writing about the difficulties we faced, what we did, a few good resources which helped us out, etc. All the code that we’ve written can be found here.

The entire SLAM project would be done using MATLAB. The first thing we decided to do was to build an obstacle-avoider bot using the Kinect, as a warm-up task of sorts. This would be a first step in several things: getting ourselves a working bot, controlling this bot using MATLAB, getting data from the Kinect and analyzing it in real time with MATLAB, and finally, combining all these steps (namely, analyzing images from the Kinect’s depth sensor in real time, and using the information obtained from them to make our obstacle avoider). Here’s how each of these steps panned out in detail:

  • Assembling the bot

    CEERI had purchased a bot from robokits.in (Streak), which could carry a laptop and Kinect on it comfortably. However, the software which was to burn the code onto the microcontroller (and to be used for serial communication with it) failed to run on any of our laptops (although it worked fine on an old Windows XP 32-bit system). Thus, we decided to use an Arduino instead, and purchased separate motor drivers (the original motor driver was integrated with the microcontroller board). We also purchased new LiPo batteries, since the original Li-Ion batteries we had received were, well, non-functional. Oh well. We now have a fully assembled, working bot 😀

    A list of the parts we used is as follows:

    Component                     Specification                                           Number
    Microcontroller               Freeduino Mega 2560 (link)                              1
    Chassis                       High Strength PVC alloy Unbreakable body (of Streak)    1
    Wheels                        Tracked wheel, Big, 10cm (of Streak)                    4
    Motor                         300 RPM, 30kgcm DC geared motor (of Streak)             4
    Motor driver                  20A Dual DC Motor Driver (link)                         1
    Battery                       Lithium Polymer 3 Cell, 11.1V, 5000mAh (link)           1
    Battery protection circuit    Protection Circuit for 3 Cell Li-Po Battery (link)      1
    Battery charger               Lithium Polymer Balance Charger 110-240V AC (link)      1
    RGBD sensor                   Microsoft Kinect for Xbox                               1
  • Controlling the bot via MATLAB using an Arduino

    The obstacle avoider would be controlled using the Arduino board, via serial communication with MATLAB, which would be processing images taken from the Kinect’s depth sensor to do the obstacle avoiding. Thus, we needed to set up MATLAB-to-Arduino communication (which I had already worked on before, though). The code can be found here. The code requires MATLAB, the Arduino IDE and Arduino I/O for MATLAB. Note that the serial port takes a very long time to show up in the Arduino IDE, and this can be solved by following this set of instructions.

  • Obtaining data from the kinect, and processing the kinect’s images in real time

    This was fairly straightforward, thanks to MATLAB’s Image Acquisition Toolbox. All that was needed in addition to this was installing the Kinect for Windows SDK (I have v1.8 installed). The code can be found here. Of course, this gives a continuous video input; later on, we’ll work with individual frames so that we can draw onto them easily.

  • Making the bot

    To make the bot itself, we used the depth image to locate obstacles. We divided the screen into 3 parts, and took into consideration how many obstacles lying within a certain distance from the Kinect were present in each of the 3 parts. The bot would then take the path with the least number of obstacles. The number of obstacles was counted as the number of connected components (i.e., by counting their centroids). The pseudocode is as follows:

    initialize serial connection with Arduino
    set up the required pins on the Arduino
    start the Kinect's video input
    while (time elapsed < total run time required)
        acquire the depth and RGB image from the Kinect
        threshold the depth image, so that only objects nearby are considered
        remove noise (by removing components whose connected area is less than a certain value)
        obtain the individual connected components, each representing an object
        obtain the centroid of each connected component
        divide the image into 3 parts, and make the bot take the direction which has the fewest centroids
    end while
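
    Since our actual implementation was in MATLAB (and lives in the repo linked above), here’s only a rough Python/OpenCV sketch of the image-processing half of that loop, purely to illustrate the idea; the distance threshold and minimum area are made-up numbers:

        import cv2
        import numpy as np

        def pick_direction(depth_mm, near_mm=1500, min_area=200):
            """Return 0, 1 or 2 (left/centre/right): the third with the fewest nearby obstacles."""
            # Keep only pixels that are valid (non-zero) and closer than the threshold
            mask = ((depth_mm > 0) & (depth_mm < near_mm)).astype(np.uint8)

            # Connected components; the area column lets us drop small (noisy) blobs
            n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask, connectivity=8)

            counts = [0, 0, 0]
            third = mask.shape[1] / 3.0
            for i in range(1, n):  # label 0 is the background
                if stats[i, cv2.CC_STAT_AREA] >= min_area:
                    cx = centroids[i][0]
                    counts[min(int(cx // third), 2)] += 1

            return int(np.argmin(counts))

        # Example with a fake depth frame (a real one would come from the Kinect)
        fake_depth = np.full((480, 640), 3000, dtype=np.uint16)
        fake_depth[200:300, 50:150] = 900       # a nearby obstacle in the left third
        print(pick_direction(fake_depth))        # -> 1 (the centre third, which is empty)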

    Here’s an image of what the laptop placed on top of the bot shows when the bot is moving:

    Screenshot of obstacle avoider

  • Problems

    The main problem that we faced was that the Kinect can’t detect objects that are closer than 80cm. Thus, if an obstacle appears in front of the bot when the bot turns, and the obstacle is closer than 80cm, the bot can’t detect it. The Kinect’s depth sensor simply returns a zero value both for objects that are too close to it (< 80cm away) and for objects that are too far away from it (> 4m away). There were a few possible solutions we could think of to this (we’d read 2 of these up somewhere online, each from a different site, but I don’t remember the sources, I’m afraid):

    1. Incline the kinect at an angle from a height, so that an obstacle at the foot of the bot is slightly more than 80cm away.
    2. For each undefined (zero) pixel, take the depth value of the closest non-zero pixel.
    3. Take the closest non-zero pixels’ depth values, and use some sort of threshold to determine whether the zero region represents an object closer than 80cm, or one farther than 4m. For example, if the pixels neighbouring a connected component of zeros are, on average, 1m away, the entire connected component is likely to represent an object closer than 80cm, while if they are, on average, 3m away, the connected component likely represents a distant object (like a wall) farther than 4m away. Note that in the examples just mentioned, I’ve used the averages of the nearest neighbouring pixels’ values, since a single undefined point is unlikely. (A rough sketch of this appears after this list.)
    4. Use 2 ultrasonic sensors to get information about when an obstacle closer than 80cm to the bot exists.
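
    Here’s a rough numpy/scipy sketch of the third idea: classify each zero-valued (undefined) region as “too close” or “too far” from the average depth of the valid pixels bordering it. The 2m boundary used below is an arbitrary number for illustration:

        import numpy as np
        from scipy import ndimage

        def classify_zero_regions(depth_mm, near_far_boundary_mm=2000):
            """Label each undefined (zero-depth) region as 'near' (< 80 cm) or 'far' (> 4 m)."""
            undefined = depth_mm == 0
            labels, n = ndimage.label(undefined)
            verdicts = {}
            for i in range(1, n + 1):
                region = labels == i
                # Pixels just outside the region: dilate it and subtract the region itself
                border = ndimage.binary_dilation(region) & ~region & (depth_mm > 0)
                if not border.any():
                    continue  # no valid neighbours to judge from
                avg = depth_mm[border].mean()
                verdicts[i] = "near" if avg < near_far_boundary_mm else "far"
            return labels, verdicts

        # Tiny example: a zero patch surrounded by ~1 m readings should come out as "near"
        d = np.full((10, 10), 1000, dtype=np.uint16)
        d[4:6, 4:6] = 0
        print(classify_zero_regions(d)[1])   # {1: 'near'}
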
  • Possible Improvements

    A few possible improvements to the obstacle avoider are:

    1. Account for the size of the object, and use area instead of centroids, so that the bot takes the path with the smallest clutter.
    2. Account for the distance of objects: at present, the bot merely considers all objects within a threshold, and ignores objects farther away. However, a possible improvement would be that if there are several objects in one direction, and very few in another, the bot would take the direction with fewer obstacles (provided, of course, that there are no obstacles it has to worry about in its immediate vicinity). At present, the bot would just continue going in a straight line, even if that would mean more obstacles in the future.
    3. Divide the screen into more parts, so that the turning of the bot is more accurate.

Now, onto SLAM!

Here are a few sub-tasks that are involved:

  • Visual Odometry: We decided to use the Kinect itself for odometry. For this, we would need to estimate movement along the x, y and z directions, and the rotation about the 3 axes. We used the paper “Fast 6D Odometry Based on Visual Features and Depth” as a reference. The first step to this would be to use a feature detector.

    Initially, we had planned to use the SIFT algorithm. For this, we tried using VLFeat. Here’s some quick test code that we wrote to test it out (it assumes that VLFeat has been set up, as described here). Here’s what it looks like:

    Applying SIFT on 2 RGB images using VLFeat

    However, SIFT is a little too slow: it takes a little over a second to run on an image.

    So, we tried implementing it with MATLAB’s SURF functions. It seems to be much, well, cleaner, and way more efficient (taking only a little under a tenth of a second). Here’s a MATLAB function that takes 2 input images, and here’s the result:

    Applying SURF on 2 RGB images using MATLAB
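
    For anyone without MATLAB, here’s a rough Python/OpenCV sketch of the same detect-and-match step (using ORB simply because it ships with the base opencv-python package; SIFT/SURF availability depends on your OpenCV build). This isn’t our actual pipeline, just the general idea:

        import cv2

        def match_features(path1, path2, max_matches=50):
            img1 = cv2.imread(path1, cv2.IMREAD_GRAYSCALE)
            img2 = cv2.imread(path2, cv2.IMREAD_GRAYSCALE)

            orb = cv2.ORB_create(nfeatures=1000)
            kp1, des1 = orb.detectAndCompute(img1, None)
            kp2, des2 = orb.detectAndCompute(img2, None)

            # Brute-force Hamming matcher; keep the best matches by descriptor distance
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:max_matches]

            vis = cv2.drawMatches(img1, kp1, img2, kp2, matches, None)
            cv2.imwrite("matches.png", vis)
            return matches

        # match_features("frame1.png", "frame2.png")  # the file paths are placeholders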

Work Hard, plAI Harder!

Gokul and I came first in plAI this Apogee. Here are a few details:

plAI was a competition where we had to create artificially intelligent bots to collect as many resources as possible in a given environment. In addition, we had to compete against another AI bot in this task. There were two possible ways of achieving victory:

  1. Based on the number of resources (“fish”) collected: whichever bot collected the maximum number of fish before the time ran out would win. In the case of a tie, the bot which collected the first fish would win.
  2. Based on health: each bot was equipped with a cannon, which could be fired to attack another bot. Getting hit by a cannon ball would cost the bot health. If one of the bots lost all its health, it lost the game, irrespective of how many fish it had collected. Hitting land would also cost the bot/raft health.

Thus, there were two aspects to the game: traversal (who could cover the given terrain optimally and collect as many fish as possible) and an adversarial aspect (who would win in an encounter).

The bot is not given knowledge about its surroundings beyond its immediate terrain: parameters like its current position on the map, whether any cannon balls within its visibility range are flying towards it, whether there are any obstacles, enemies or fish in its immediate field of view, its current health, etc.

There were 4 primary parameters we could control: the direction of acceleration of the bot, whether or not its brakes are applied, whether or not its cannon is firing, and the direction in which its cannon fires.

There were several other factors that we had to keep in mind, though, such as mathematical equations representing the viscosity of the water, which meant that we would always have to apply an acceleration, and the equation that gave the braking force.
Our strategy was two-fold:
  1. With regards to collecting the fish, we would make the bot traverse the terrain in a random fashion for a certain interval of time (say, 5 seconds), and if the bot remained in approximately the same region, we would then make it choose a direction at random and continue in that direction. In case it no longer remained stuck (i.e., it had been stuck in that location purely due to the random nature of traversal), the random traversal of the terrain would be resumed. In case it remained stuck, however, i.e., it had encountered an obstacle, it would travel parallel to that obstacle with a left-arm-on-the-wall approach, checking for cycles by keeping its starting position in mind. Thus, in a terrain with sparsely distributed obstacles, this algorithm would, in general, cause a randomized traversal of the map such that the bot doesn’t remain stuck in one location, while in the case of a map with a large number of obstacles, the bot would effectively take a left-arm-on-the-wall approach, which would be reasonably efficient in a maze-like environment, for example.
  2. With regards to firing the cannon, at the start of the game we randomly fired the cannon towards the diagonally opposite corner of the screen, where our opponents would likely start, hoping for some stray hits. Then, after we encountered the opponent and they left our field of view, we would fire in the direction in which we last saw them, hoping for a few more stray hits.

There were several other minor details we had to take care of as well, such as ensuring that the bot takes a fish as soon as it sees it, irrespective of where any incoming cannon balls are headed, ensuring that the bot ignores a fish if there’s an obstacle between it and the fish, and ensuring that it ignores the enemy if there’s an obstacle between it and the enemy.

All in all, it was a ridiculously fun experience.

iStrike 2014

iStrike is a competition where a bot has to navigate the field autonomously using an overhead camera. The arena consists of a road with a boom barrier at which the bot has to stop (till the barrier opens up), a T-shaped bend, and 2 zones at either end of the little dash above the T (for a clearer idea, refer to the picture of the arena from the overhead camera below).

Controlling the Arduino from MATLAB was simple, since I had already worked on it before, when I wanted to plot distances from the bot in MATLAB using data received from an ultrasonic sensor.

Here’s what our bot looked like:

Photos of the bot

And here’s how the arena looked from the overhead camera:

iStrike Round 1 Arena

Unfortunately, I didn’t capture an image of how the bot looked on the arena, or of how things looked after thresholding the image.

There were a few things we had to take care of during the calibration phase:

  • Ensuring the bot was detected, by modifying the operations performed on the image slightly (although we had done most of this beforehand, we needed a few finishing touches)
  • Rotating the image appropriately, so that it was in a “T” shape as initially mentioned
  • Ensuring that, after thresholding, stray white pixels were removed, by setting an appropriate minimum number of pixels which had to be present in a component so that it was not removed by bwareaopen

We didn’t need to determine the threshold ourselves, since MATLAB, which uses Otsu’s thresholding method by default, does this brilliantly.

Our algorithm first thresholded the image, and removed the stray pixels (or groups of stray pixels) using bwareaopen. We then used the “Extrema” region property to determine the extreme points of the arena. Through this, we obtained the locations of the topmost edge, the bottom-most edge, the left-most edge and the right-most edge, as shown:

extrema

Now, we need to find the bottom edge of the horizontal line of the “T”. For this, we used the discarded red component, as shown:

extrema_red

where the red component was detected using something similar to red_component = 2*im(:,:,1) - im(:,:,2) - im(:,:,3);

We then detected the bot along similar lines, but for yellow instead of red, extracted its centroid, and tried to ensure that the bot’s centroid would lie within the left and right edges (with some region kept as a buffer, of course), until it crossed the lower edge of the T’s horizontal part, at which point it would turn towards the direction of green.
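
Here’s a rough numpy sketch of the colour-component trick and the extrema/centroid computations described above; it assumes an RGB-ordered array (channel 0 being red, as with im(:,:,1) in MATLAB), and the threshold values and placeholder image are invented for illustration:

    import numpy as np

    def colour_component(im, channel):
        """2*target minus the other two channels, like 2*im(:,:,1) - im(:,:,2) - im(:,:,3) in MATLAB."""
        im = im.astype(np.int16)
        others = [c for c in range(3) if c != channel]
        return 2 * im[:, :, channel] - im[:, :, others[0]] - im[:, :, others[1]]

    def extrema(mask):
        """Topmost, bottom-most, left-most and right-most white-pixel coordinates."""
        ys, xs = np.nonzero(mask)
        return {"top": int(ys.min()), "bottom": int(ys.max()),
                "left": int(xs.min()), "right": int(xs.max())}

    # Placeholder frame: a red stripe for the T's horizontal bar, a yellow patch for the bot
    im = np.zeros((240, 320, 3), dtype=np.uint8)
    im[50:60, 40:280] = (255, 0, 0)
    im[180:195, 150:165] = (255, 255, 0)

    red = colour_component(im, 0)
    green = colour_component(im, 1)

    red_mask = (red > 100) & (green < 0)        # red, but not yellow
    yellow_mask = (red > 100) & (green > 100)   # both red and green components high

    print(extrema(red_mask))                     # edges of the T's horizontal bar
    ys, xs = np.nonzero(yellow_mask)
    print(xs.mean(), ys.mean())                  # the bot's centroid, used for steering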

This worked well, except for the fact that we forgot to keep in mind two key aspects:

  • Orientation of the bot: The direction the bot was facing could not be obtained with the patch of yellow. This would not, however, pose too much of a problem, provided the bot didn’t do something unexpected (such as magically take a U-turn), as in the long run, the bot would turn the other way if it turned too much towards one side.
  • “The Barrier”: The barrier is a black sheet that would be thrown across the track in front of the bot before the first (and only) turn, at which point the bot would have to stop until the barrier was removed. One thing that we didn’t realize was that The Barrier was compulsory. We had inquired, and had been informed that if we didn’t stop at the barrier, we would be penalized a few points. However, at the event, we were disqualified outright. Sigh…

All in all it was a wonderful experience, and we’re definitely participating next year!

Modulated Colour Sensors

In order to have reliable line following, we at Team Robocon realized that calibration is one of the most important (and unreliable) aspects (fresh batteries aside :p ). So, we (Anirudh and I) started working on fool-proof line sensors that work unaffected by ambient light. Here is what we planned, what we did, and the problems we faced.

What we planned

Our plan was this: the intensity of ambient light is generally constant, and if it varies, it does so really REALLY slowly (if one keeps switching a light on and off as fast as one can, one JUST ABOUT MIGHT be able to achieve a frequency of… *gasp* 10Hz. I tried. I reached 2Hz :P). So, we planned to give an LED a sinusoidally varying voltage as its source, and, so that it doesn’t get reverse biased, the source would have a DC voltage superimposed on it so that the net voltage never goes below 0V: the LED is always on, and only its brightness varies.

The light from this LED gets reflected if it falls on white, and absorbed if it falls on black. In the presence of ambient light, a plain light sensor always detects light, and the micro-controller ends up being tricked into thinking that the particular sensor is above white, even though it may be above black. In our scheme, the reflected LED light shines onto a photo-detector/LDR (in series with a resistor), and a voltage is dropped across the combo as usual. Since the received light’s amplitude is modulated, the resistance of the LDR, and hence the net voltage dropped across it, varies. Thus, the voltage across it will be a sine wave coupled with a DC voltage. In the absence of modulated light, the sine wave will not be present in the output voltage. It is this sine wave that is the solution to our long quest for ambient-light-free-happy-happy-land.

To detect this sine wave, we pass the output voltage through a high-pass filter, which permits ONLY this sine wave through (provided, of course, that the sine wave frequency is more than the filter’s cut-off frequency). We don’t really need a band-pass filter, as ambient light, being un-modulated, has no frequency (in varying amplitude, I mean) at all, let alone a frequency high enough to cross the high-pass filter and mandate the use of a band-pass filter. The output from the high-pass filter has to now be rectified, so that only the positive half of the sine is let through. Then, we smoothen the bumpy wave with a capacitor, and voila! If there is modulated light shining on the photodiode/LDR, we have a “high” output across the capacitor, else the output is low.

Simple?
Not really.

All this will be interfaced with a micro-controller, right? So, here comes the million dollar question: where on earth do we get a sine wave from?

There are a number of possible (surprisingly, not-so-simple) ways:

  1. Use an R-C cascade:
    What this does is basically keep filtering the square wave. The Fourier series of a square wave consists of sine waves at a number of frequencies (refer to point 2 below). The RC cascade keeps filtering out the higher-frequency components, to end up giving us a sine wave. Here is the cascade that we modeled, along with the expected results:

    Square to sine- circuit 1

    The wave in red shows a pure sine wave, and the one in cyan is the output of this circuit, which is fairly close.

    Here is the actual output we got at 30kHz, the frequency at which we input the square wave:

    Actual Output

    which is, in fact, fairly close. Here’s a pic of how the circuit looked:

    RC circuit pic

  2. Use TI’s UAF42 IC:
    This method would involve converting one of the Arduino’s PWM square wave outputs, or the square wave output from a 555 timer IC, into a sine wave of the same frequency. Involving a single IC and 3 resistors, we thought it’d be a “cleaner” implementation: way more manageable (read: fewer pesky loose contacts). Details of how to use this can be found here, and its data sheet here. The how-to paper basically says that if the IC is used as a band-pass filter, all higher-order terms in the Fourier series of the square wave get filtered out, leaving a pure sine wave. If the frequency is not among the ones specified in the paper, then the software filter42 needs to be downloaded, installed, and run… on DOS-BOX.
    We tried this method, making the exact circuit given in the documentation above. However, this only gave us something LIKE a sine wave. The output we got was:

    UAF42 output

    which isn’t exactly a sine wave, but is close to it (sort of).  What we think should be done to improve this is cascade 2 resistors to get a higher order filter, but this defeats the very purpose: to keep the circuit simple.

  3. Use a Wien bridge oscillator (or any of a number of other oscillators that use an op-amp):
    Another method that we had in mind, but didn’t actually try, since method 1 gave us decent results.

So, with the sine wave generation done, we used a waveform generator to input a ready-made sine wave so that we could see the output on a CRO, and analyse how it plays out. That’s when we realized something we hadn’t thought of before: when the LED is given an input sine wave as voltage, if the frequency is too high, the LDR doesn’t respond fast enough. We found that the lower the frequency, the better the variation in voltage across the LDR looked (the LDR being connected to a voltage source through a constant series resistance, and the LED to a sine-wave-plus-DC voltage source, with the LED shining directly onto the LDR). We experimented a little, and realized the LDR doesn’t respond to anything above 10kHz, and responds well to frequencies below 1kHz. But we had designed circuit 1 for 30kHz. Ahhhh!!! Back to the drawing board… (or in this case, edX’s circuits and electronics circuit simulator). Oh well, we chose a frequency of around 620Hz, and continued with the wave generator (620Hz is one of the frequencies listed in the document above).

We had simulated the high-pass filter → diode rectifier → capacitor ‘smoothener’ chain, and here is what we got:

LDR to output- circuit 2

Note that above, the voltage source is the voltage across the LDR.
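
To make the idea concrete, here’s a small numpy/scipy simulation of the same chain (DC + 620Hz sine in, high-pass filter, ideal half-wave rectifier, RC smoothing). The component values and the ideal-diode assumption are mine, so it’s only a sanity check of the concept, not a model of our actual circuit:

    import numpy as np
    from scipy import signal

    fs = 100000.0                        # sample rate of the simulation (Hz)
    t = np.arange(0, 0.05, 1.0 / fs)     # 50 ms of signal
    f_mod = 620.0                        # modulation frequency (Hz)

    def demodulate(v_in):
        """High-pass filter -> ideal half-wave rectifier -> RC smoothing."""
        b, a = signal.butter(1, 100.0 / (fs / 2), btype="highpass")  # ~100 Hz cut-off
        v_hp = signal.lfilter(b, a, v_in)
        v_rect = np.clip(v_hp, 0, None)          # keep only the positive half
        rc = 0.01                                # 10 ms smoothing time constant
        alpha = 1.0 / (1.0 + rc * fs)
        v_out = np.zeros_like(v_rect)
        for i in range(1, len(v_rect)):
            # simple discrete first-order RC low-pass update
            v_out[i] = v_out[i - 1] + alpha * (v_rect[i] - v_out[i - 1])
        return v_out

    v_modulated = 2.5 + 0.5 * np.sin(2 * np.pi * f_mod * t)   # modulated LED light present
    v_ambient = np.full_like(t, 2.5)                          # only (constant) ambient light

    print("with modulated light: %.3f V" % demodulate(v_modulated)[-1])
    print("ambient light only:   %.3f V" % demodulate(v_ambient)[-1])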

What we observed in the circuit, though, was similar, but the output voltage was much lower (a very sad 20mV).

Here is a video showing output across R in the above circuit:

Here is a video showing the output at the point marked by the red probe above, but without the capacitor added:

Here is a video of the output shown by the probe, but in real life:

So, ya, it’s 20mV. Waaaaaaaaaayyyy too small. Even amplification isn’t really helpful, as any noise, if present, will be amplified too.

Hmmmm….

Telepathic Line Follower at Aavishkar 2013

A few of us from Team Robocon went to Aavishkar 2013 at UIET, Punjab University in Chandigarh. There were 2 rounds in this event, held over 2 days: the 14th and 15th of September.

The first round was an elimination round, where we had to follow a blue line on a white background. Here is the track that they had specified:

Telepathic Follower Arena Specifications- Round 1

We realized that it was minuscule (we even called them to confirm!), and as they had specified a low percentage of error, we decided to hard-code the bot, with a simple backup code just in case. What we basically did was to slow down the bot as time passed, so that it could take tighter turns more easily. Further, after a certain amount of time, it would rotate thrice at approximately 90° angles, which was the only way for the bot to take the last few turns to reach the center. Here’s a video of how our bot ran on our track, made as per the specifications provided:

 

The second round was where the “Telepathic” part of things came into the picture. A bot had to follow the given track, while another bot had to trace out the path made by the “master” bot from wireless data received from the master. We decided to use a one-way RF transmitter-receiver pair for this task. The master would send a signal via the transmitter module by using digitalWrite() on 8 of its pins to encode the signal, while the “telepathic” bot would read this signal from the RF receiver via 8 digital pins using digitalRead(). Here’s a video of a prototype we developed, where, for testing things out, we used a remote control of sorts on the telepathic bot rather than the master directly:

Here’s what the track was supposed to be like:

Telepathic Follower Arena Specifications- Round 2

Though seemingly complex, it was specified that only the following blue loop had to be followed:

Telepathic Follower Arena Specifications- Round 2- path

This worked extremely well in our favor, since we were using line sensors with red LEDs on them, and the blue line was the one they could pick out best. Further, there were no sharp turns, which meant that the telepathic bot could trace the path fairly easily with the instructions received via the RF receiver.

We received the shock of our lives when we reached there! We were shown the track before bot submission, and it was huge! Well… at least compared to the one they had specified initially. It was over four times the size! We didn’t protest, though, and loaded our backup code into the bot before submitting it. We then calibrated the bot, and started it up. To our dismay, the bot stopped after three-fourths of the track was done. We realized that this was because, post calibration, all the tube lights had been switched on for a photo shoot before the start of round 1. In our second run, with the lights all off, i.e., with lighting exactly like it had been during calibration, the bot ran (almost) perfectly. We went on to the second round! Here’s the video of round 1:

 

The second round was initially supposed to be on a different track than the first round, but the organisers decided to reuse the first round’s track anyway. They thought this would be easier, considering the first round track wasn’t coloured, but the change ended up working against us: we had used a colour sensor with red LEDs (which meant the multiple colours wouldn’t really have been an issue), and the initially specified track involved following a blue line with no sharp turns, while the first round’s track had 90° turns.

To make matters worse, during the round, the master bot ran out of battery (we had brought 2 rechargeable 9V batteries with us, out of which one had failed (and died totally, providing a massive voltage of 0.02V) during the testing phase, while the other drained fairly quickly, and we had to use non-rechargeable 9V batteries which drain quicker than you can say “Ouch”), while the telepathic bot went crazy (read: the RF module had a short-circuit). Here’s an image from the mayhem that ensued:

 

Moral of the story: always be prepared for the worst (which we (almost) were, considering we had a backup line-following code in spite of being assured that the track dimensions wouldn’t be changed), and one can never, ever have enough batteries.

Making a colour detector

Recently, I decided to make a colour detector, which might come in handy if we ever need it to follow a coloured line.

The idea was simple: have 4 coloured LEDs (red, green, yellow, blue) connected through resistors to an Arduino. Have an LDR in between them, connected to the supply voltage with another resistor in series. Now make the LEDs shine one at a time, for an equal interval of time each. Read the voltage across the LDR, which changes based on the colour which the entire setup faces. Calibrate it by setting thresholds for each colour you wish to detect, and we’re done!
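
If the Arduino is running StandardFirmata, something along these lines would do it from Python with the pyfirmata library; the serial port, pin numbers and calibration thresholds below are all placeholders I’ve made up for illustration:

    import time
    from pyfirmata import Arduino, util

    board = Arduino("COM3")                      # serial port is a placeholder
    it = util.Iterator(board)
    it.start()

    ldr = board.get_pin("a:0:i")                 # LDR voltage divider on A0
    led_pins = {"red": 8, "green": 9, "yellow": 10, "blue": 11}
    leds = {c: board.get_pin("d:%d:o" % p) for c, p in led_pins.items()}

    def read_reflectance():
        """Shine each LED in turn and record the LDR reading (0..1)."""
        readings = {}
        for colour, led in leds.items():
            led.write(1)
            time.sleep(0.05)                     # let the LDR settle
            readings[colour] = ldr.read()
            led.write(0)
        return readings

    # Invented calibration table: minimum reading under each LED for each surface colour
    calibration = {
        "red surface":  {"red": 0.6, "green": 0.2, "yellow": 0.4, "blue": 0.2},
        "blue surface": {"red": 0.2, "green": 0.3, "yellow": 0.2, "blue": 0.6},
    }

    def classify(readings):
        for surface, mins in calibration.items():
            if all(readings[c] is not None and readings[c] >= v for c, v in mins.items()):
                return surface
        return "unknown"

    print(classify(read_reflectance()))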

Software for Accessibility for the Blind- Meet 1

Dr. Sai Jagan Mohan had emailed us last week describing something he had in mind: a project which entailed discovering programs to enable the blind, especially students, to efficiently use a PC, both for academic activities (say, reading a text or having it read to them) and recreational ones (e.g., listening to music, playing games, etc.), and possibly designing a program to help them learn how to use it. I got in touch with him expressing my interest, and he asked me and a few others to meet him today. There, we discussed a number of points:

    1. Open-source vs. proprietary software: We discussed our perceptions of the various advantages and disadvantages of both types, which include:
        • Proprietary software is generally more popular and easier to use.
        • Open-source software is available much more easily, and free of cost.
        • Most good open-source software is meant for Linux, while most PCs come with Windows by default, and Windows is generally accepted as being easier to use for a first-timer.

      At this point, I proposed a solution somewhere in between these: to use Windows’ Ease of Access options, Magnifier and Narrator. Not exactly open source, but already bundled with Windows anyway (i.e., effectively free). We agreed on the idea and, none of us having much experience with these features, decided to try them out for ourselves.

    2. Why such software is hardly used in India: The main reason we came up with is that there is almost no awareness about it, and very few organisations and training programmes exist for the same. Further, many students come from a fairly uneducated background, and their parents don’t know how to find such information, nor are the teachers of a visually impaired child able to guide him/her, as they themselves have little experience with such children. The solution: talk to NGOs and, in due time, have a tie-up with them, so that we may be able to reach out to these children with their (and possibly the government’s) help.
    3. Vernacular languages: India, being the amazingly diverse nation that it is, has countless languages and dialects. Therefore, a difficulty a blind child is likely to face is actually understanding what the screen reader reads, not only because almost all readers are tailored to read Western languages (again, something we weren’t very sure of, and needed to research), but also because of the accent in which the narrator narrates, which might be difficult for a non-native English speaker to decipher. The solution? We decided that the best way of dealing with this was to teach the child English from a young age, although how the child will be able to understand the very different accent, we couldn’t decide.
    4. The difficulties they were likely to face: The main difficulty was likely to be actually getting blind students accustomed to the PC, as well as teaching them enough English to comfortably use the PC.
    5. What the students can use it for: We had in mind 2 primary goals: recreation (something that can be enjoyed even without the sense of vision, like music, which also has songs in almost all Indian languages, and which crosses the barriers of language), as well as education (for example, through audio books, either in English or in their mother tongue), and perhaps something that melds the 2 together (like an educational game, perhaps).

All in all, it was a very good start, and we all agreed that our first focus should be on finding out the most convenient software for these students, in terms of both ease of access and availability, starting with the inbuilt features in Windows.

My first line follower

During Apogee 2013, my sidee, Arjun, and I decided to participate in Track-o-Mania, an event which involved line following, with a twist.
The bot we had in mind was a line follower which involved the following:
A PID line-following algorithm: PID stands for Proportional-Integral-Derivative. A very nice explanation of how PID can be used to rapidly reach a steady value is given here, and one on how PID can be implemented in robotics is given here.
ADC: Analog-to-digital conversion; we used ADC to calibrate the bot’s IR sensors in our code rather than manually adjusting potentiometers at the venue.
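
Our actual code ran on the Arduino, but the control loop boils down to something like the sketch below (in Python, with made-up sensor weights, gains and speed limits, just to show how the PID correction steers the two motors):

    class PID(object):
        def __init__(self, kp, ki, kd):
            self.kp, self.ki, self.kd = kp, ki, kd
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, error, dt):
            self.integral += error * dt
            derivative = (error - self.prev_error) / dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    # Weights for the 5 IR emitter-detector pairs: negative = line to the left, positive = to the right
    WEIGHTS = [-2, -1, 0, 1, 2]

    def line_position(sensors):
        """sensors: list of 5 calibrated readings (1 = on the line, 0 = off it)."""
        total = sum(sensors)
        return sum(w * s for w, s in zip(WEIGHTS, sensors)) / float(total) if total else 0.0

    pid = PID(kp=60.0, ki=0.0, kd=0.5)    # gains are made-up numbers
    BASE_SPEED = 150                       # nominal PWM value

    def motor_speeds(sensors, dt=0.01):
        correction = pid.update(line_position(sensors), dt)
        left = max(0, min(255, BASE_SPEED + correction))
        right = max(0, min(255, BASE_SPEED - correction))
        return left, right

    print(motor_speeds([0, 0, 1, 1, 0]))   # line slightly to the right -> slow the right wheel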

We didn’t know how to get the PID constants automatically using the auto-tune library, nor did we know exactly how to use the Ziegler-Nichols method.

The bot primarily comprised motors, wheels, an L293D motor driver, an acrylic chassis, and an IR emitter-detector array consisting of 5 emitter-detector pairs.

For the second round, we had planned to put 2 IR emitter-detector pairs on either side of the bot to detect the soldiers and terrorists at different heights, an IR emitter-detector pair in front to detect a wall, a motor with a platform on which to keep the first-aid kit, which would rotate to drop the kit, and a few LEDs on which to keep a count of soldiers and terrorists. However, we were not able to implement these, as the bot decided to trouble us with a loose contact, and we had to remove a soldered motor driver (an error that took us a massive amount of time to figure out!) and plonk a breadboard there instead.

Strangely enough, not a single participant had prepared for the second round! Our bot was very fast compared to the others, as we had used PID, but sadly, the track had bumps in it (as opposed to the smooth track promised in the rules), and our motors didn’t have high enough torque to deal with this…

But it was a brilliant learning experience, and it is this event that motivated me to join Robocon even more.
