Fritzing: Custom Components

So I've been playing around with a few more electronic circuits over the last week or so and found myself wanting to move some of the simple logic processing off the Arduino. As an example, I wanted to implement the following truth table:

Inputs   Outputs
 A  B     Y  Z
 0  0     0  0
 0  1     0  1
 1  1     1  1
 1  0     1  1

If you look closely at this table you will see that Y is the same as A, while Z is simply A OR B. This would be simple to implement using an Arduino; in fact, you would just need the following sketch:
void setup() {
  pinMode(11, OUTPUT); // output Y
  pinMode(12, OUTPUT); // output Z

  pinMode(8, INPUT); // input A
  pinMode(9, INPUT); // input B
}

void loop() {
  digitalWrite(11, digitalRead(8));
  digitalWrite(12, digitalRead(8) | digitalRead(9));
}
This could be simplified slightly by not using the Arduino to set Y and just using A directly, but it would still require three pins to read A and B and generate Z (see the sketch after this paragraph). Given the limited number of pins on an Arduino this seems wasteful. Most logic gates can be built from a combination of transistors, but they quickly get complicated, especially if you need more than one logical operation. Fortunately there are a number of cheap integrated circuits you can buy that implement the standard logical operations (and some less common ones). After a quick trip to Maplin on Thursday, 99 pence got me a single chip with four 2-input OR gates: the 74HCT32N from NXP. It was simple to add it to the breadboard and wire it into my circuit instead of using the Arduino to compute the OR.
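For reference, that slightly simplified Arduino-only version (with Y wired directly to A rather than driven by the Arduino) would look something like this:

// Y is wired directly to A, so the Arduino only computes Z
void setup() {
  pinMode(12, OUTPUT); // output Z

  pinMode(8, INPUT); // input A
  pinMode(9, INPUT); // input B
}

void loop() {
  digitalWrite(12, digitalRead(8) | digitalRead(9));
}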

The only problem arose when I went to record my circuit using Fritzing. Clearly Fritzing can't support every single component in the world, and it didn't have a specific component to represent the 74HCT32N. It does, however, have a number of generic parts which are easy to customize. One of these allows you to represent any integrated circuit by specifying the number of pins etc. The problem is that this generates a breadboard view where the chip is simply labelled IC and a schematic view that just labels the pins on the chip and tells you nothing about the internal workings. Now for a complex chip this makes sense but for something as simple as OR gates it would be helpful if the schematic view included them.
The left-hand chip is how Fritzing represented the 74HCT32N when I used the generic IC part; the chip on the right is how I wanted it to look. Given that this is a screenshot, I obviously succeeded, and the rest of this post will explain how.

Now you can create parts from scratch, but after an hour or so of trying and getting nowhere, I hit upon a simpler solution.

The first step is to use the parts editor (ignoring the warning about it being bug-ridden) to label the pins, give the component a proper name, etc. Once you are happy with those values, click to save it as a new part.

The new part will be listed in the "MINE" parts bin. Simply right-click on it and choose "Export Part". This will give you a file with a .fzpz extension. This is just a zip file which you need to unpack. You should find that you now have five files: a .fzp file which describes the part and four SVG image files for the icon, breadboard, schematic and PCB views.

You can now go ahead and edit the SVG files. Be careful not to alter any of the connectors in the images, but otherwise it's safe to make changes. If you want to draw logic gates inside a chip then you can simply grab the appropriate images from Wikipedia, which makes life really easy as all you need to add are the connections between the gates and the pins. As well as improving the schematic view I also edited the icon and breadboard views: the breadboard view now gives the full part number and the icon shows OR instead of IC. Note that to avoid any problems you should save to plain SVG rather than any application-specific format.

Once you are happy with the images go back to the parts editor and replace the images with the new versions (even though I didn't edit the PCB image I replaced it anyway, as a bug in the parts editor means it seems to get lost if you don't). When you are done you can export the part again if you want to keep a backup copy or share it with other Fritzing users.

I'm going to make any custom components I produce available to anyone who wants them. For now there is just this single chip but they will all appear here.

:), :-), :o), or :]

I've been doing a lot of different things at work recently but one of them got me thinking about emoticons, or as I think most people call them, smileys.

Smileys (as I'll refer to them throughout) often convey emotive opinions. For example, if you have just been given a gift you might use :) in an e-mail or tweet saying thank you. On the other hand if your flight has been delayed you might well use :( to show your displeasure. It's fairly easy to build two lists of smileys, one positive and one negative, containing all the possible variations of each smiley. These lists can then be used as one feature when trying to classify text and such lists can be found in most opinion mining systems which attempt to label a piece of text as positive, negative or neutral.
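For illustration, the start of two such lists might look like this (just a small sample; a real system would include many more variants):

positive: :) :-) :o) :] :D =)
negative: :( :-( :o( :[ :'( =(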

It turns out though, that in some cases a sad smiley can actually have a positive meaning. This came to my attention when participating in the profiling task of RepLab 2012. The motivation behind the profiling task is brand management. For example, imagine you are the PR company responsible for a company like Apple. In an ideal world you would read every document ever written about the company and try and address any negative opinions you discover. Of course we don't live in an ideal world, and the profiling task aimed to encourage development of systems that could do this task automatically. The task used tweets as the documents, and the aim was to first determine if a tweet was relevant to a given entity (i.e. is it talking about Apple the computer manufacturer rather than the fruit or the record label) and then to determine its polarity, i.e. does the tweet have positive or negative implications for the company's reputation. On the face of it, you would think that the second step could be performed by a standard opinion mining system. Unfortunately I don't think that is necessarily the case.

The example given in the task description makes it quite clear that a sad sounding tweet can actually have a positive effect on a brand: "R.I.P. Michael Jackson. We'll miss you." I could easily imagine this tweet being extended with the addition of a sad smiley, so a sad smiley could easily occur in both negative and positive tweets.

As I was intending to use a machine learning algorithm to build a classifier to learn polarity, this dual use of smileys (and of words in general) didn't bother me too much: if I could find enough training data then hopefully the algorithm would sort out the contradictions. The problem was that I didn't have that much training data (400 tweets for each of 6 entities). Unfortunately there are many ways to write the same smiley, and this variation means that the algorithm might see each version just once and would therefore be unable to draw any strong conclusions from its presence.

The solution of course is to normalize each smiley to a given form. So, for example, I would normalize all the smileys in the title of this post to :) and then feed that to the machine learning algorithm. In GATE this was easy to achieve using a gazetteer with an additional feature to store the normalized version. The gazetteer I've built covers all of the Western-style (i.e. viewed from the side) smileys I could find as well as the relevant Unicode symbols. To save anyone else the hassle of having to build such a gazetteer I've made it available for anyone who is interested (note: when loading it into GATE, use a space as the feature separator).
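To give a flavour of the format, a few entries in such a gazetteer list might look like the following (the feature name norm is purely illustrative; the actual feature name in my gazetteer may differ):

:-) norm=:)
:o) norm=:)
:] norm=:)
:-( norm=:(
:'( norm=:(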

Of course, after the work of assembling the gazetteer it didn't actually make any difference to the performance of my polarity classifier. It turns out that in all the training data I had there weren't many smileys! Given that I'm going to be doing more work in this area over the coming months, I'm hoping that it will eventually turn out to be useful.

This does all lead to a question though -- while there are lots of ways of writing the same smiley, do most people use the simplest form, i.e. :) instead of any of the three-character versions?

An Arduino Powered (Scale) Speed Trap

After a break of around two decades I've recently started building a model railway. One of the issues I've faced is trying to work out how fast I should be running the trains so that their speed reflects reality given the scale at which they are modelled. I'm guessing the details won't interest everyone reading this post, but if you are interested then I've blogged about this from the model railway side on one of my other blogs. Suffice it to say that what I needed was to be able to measure the time it took for a model locomotive to travel a certain distance. Now I could have used a ruler and a stopwatch but this would never be very accurate. So, being me, I turned to a more computer-oriented solution: an Arduino powered speed trap!

The brief I set myself was simple:
  • two switches to measure the time taken to travel a given distance
  • a green LED to signify the train was travelling below a given speed limit
  • a red LED to signify the speed limit was being broken
  • full speed details passed back to the PC for display
The main question was what form of switch to use. Physical switches were out as I would never be able to accurately place them on the track in a way that any locomotive would be able to trigger them without incident. A reed switch would be easy to use and to hide on the layout, but would mean adding a magnet to each locomotive, which seemed a bit daft. An infrared beam across the track would also work, but hiding it on the layout would be difficult. In the end the only sensible idea I could come up with was using a light dependent resistor (LDR) and watching for a sudden change in resistance as the light was blocked by the moving locomotive.

So on my way home on Thursday I called in at my local Maplin store and bought two of the smallest and flattest LDRs they had (specifically the 1.8k-4.5k version).

Now while I knew that the more light you shine on an LDR the less resistance it has, I wasn't sure of the best way of making use of this in conjunction with the Arduino. Fortunately the web is awash with information and tutorials and I quickly came across the solution: wire the LDR as one half of a voltage divider and read the divider's output with one of the Arduino's analog pins.
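As a minimal sketch of the idea (assuming the LDR sits between 5V and the analog pin, with a fixed resistor from the pin to ground, so more light gives a higher reading):

void setup() {
  Serial.begin(9600);
}

void loop() {
  //read the divider output: low values mean the sensor is in shadow
  int level = analogRead(0);
  Serial.println(level);
  delay(500);
}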

So with my two LDRs, two LEDs and a bunch of resistors I knocked together the following (note that both views were generated at the same time using Fritzing; I really am impressed by this application).


As you can probably see this is a little more complicated than it needs to be as it uses two resistors for each LED, but this was the best I could manage with the resistors I had.

Of course the hardware is only half of the solution. Without appropriate software the Arduino isn't going to do anything useful. Fortunately I'm better at writing software than I am at designing circuits so this half was easier.

The code is (fairly) straightforward. Essentially it's a state machine that (ignoring invalid inputs) works through the following steps:
  1. wait until sensor 1 is triggered
  2. when sensor 1 is triggered record the time
  3. wait until sensor 2 is triggered
  4. determine the time difference between the two sensors being triggered and use this to calculate the speed of the locomotive
  5. wait until both sensors have returned to normal then return to step 1
This is easy to implement and the full Arduino sketch is as follows:

/**
 * ScaleSpeed
 * Copyright (c) Mark A. Greenwood, 2012
 * This work is licensed under the Creative Commons
 * Attribution-NonCommercial-ShareAlike 3.0 Unported License.
 * To view a copy of this license, visit
 * http://creativecommons.org/licenses/by-nc-sa/3.0/. 
 **/

//keep the sketch size down by only compiling debug code into the
//binary when debugging is actually turned on
#define DEBUG 0

//all possible states of the state machine
const byte TRACK_SECTION_CLEAR = 0;
const byte ENGINE_ENTERING_SECTION = 1;
const byte ENGINE_LEAVING_SECTION = 2;

//the current state machine state
byte state = TRACK_SECTION_CLEAR;

//the analog pins used for each sensor
const byte SENSOR_1 = 0;
const byte SENSOR_2 = 1;

//the digital pins for the signalling LEDs
const byte GREEN_LIGHT = 13;
const byte RED_LIGHT = 12;

//the threshold values for each sensor
int sensor1 = 1024;
int sensor2 = 1024;

//intermediate steps to calculate the scale distance we are measuring
const float scale = 76; //this is OO gauge
const float distance = 74; //measured in mm
const float scaleKilometer = 1000000.0/scale;

//This is the only value we actually need to do the calculation
//We could do this calculation on the computer and pass it across
//or store it in EEPROM. it all depends if we expect to always be
//attached to a computer or if we want to run standalone etc.
const float scaleDistance = distance/scaleKilometer;

//the track speed limit in mph
const float speedLimit = 15;

//the time (in milliseconds from the Arduino starting up) that the
//first sensor was last triggered
unsigned long time;

void setup(void) {
  //enable output on the digital pins
  pinMode(GREEN_LIGHT, OUTPUT);
  pinMode(RED_LIGHT, OUTPUT);

  //turn on both LEDs to show we are calibrating the sensors
  digitalWrite(GREEN_LIGHT, HIGH);
  digitalWrite(RED_LIGHT, HIGH);
  
  //configure serial communication
  Serial.begin(9600);
  
  //let the user know we are calibrating the sensors
  Serial.print("Callibrating...");
  
  while (millis() < 5000) {
    //for the first five seconds check and store the lowest light
    //level seen on each sensor
    sensor1 = min(sensor1, analogRead(SENSOR_1));
    sensor2 = min(sensor2, analogRead(SENSOR_2));
  }
  
  //the cut off level for triggering the state machine
  //is half the lowest reading seen during calibration
  sensor1 = sensor1/2;
  sensor2 = sensor2/2;  
  
  //we have now finished calibration so tell the user...
  Serial.println(" Done");
  
  //... and set the signalling to green (i.e. we haven't yet seen
  //anything break the speed limit!)
  digitalWrite(GREEN_LIGHT, HIGH);
  digitalWrite(RED_LIGHT, LOW);
}

void loop(void) {
    
  if (state == TRACK_SECTION_CLEAR) {
    //last time we checked the track was clear
    
    if (analogRead(SENSOR_1) < sensor1) {
      //but now the first sensor has been triggered so...
      
      //store the time at which the sensor was triggered
      time = millis();

      //advance into the next state
      state = ENGINE_ENTERING_SECTION;
      
      #if (DEBUG)
        Serial.println("Train entering measured distance");
      #endif

    }
  }
  else if (state == ENGINE_ENTERING_SECTION) {
    //the last time we checked the first sensor had triggered but
    //the second was yet to trigger
    
    if (analogRead(SENSOR_2) < sensor2) {
      //but now the second sensor has triggered as well so...
      
      //get the difference in ms between the two sensors triggering
      unsigned long diff = (millis() - time);

      //calculate scale speed in kph
      //3600000 is number of milliseconds in an hour
      float kph = scaleDistance*(3600000.0/(float)diff);

      //convert kph to mph
      float mph = kph*0.621371;
      
      //report the time and speed to the user
      Serial.print("Speed Trap Record: ");
      Serial.print(diff);
      Serial.print("ms ");
      Serial.print(kph);
      Serial.print("kph ");
      Serial.print(mph);
      Serial.println("mph");
      
      if (mph > speedLimit) {
        //if the speed we calculated was above the speed limit
        //then turn off the green LED and turn on the red one
        digitalWrite(GREEN_LIGHT, LOW);
        digitalWrite(RED_LIGHT, HIGH);
      }
      else {
       //if the speed we calculated was not above the speed limit
       //then turn off the red LED and turn on the green one
       digitalWrite(GREEN_LIGHT, HIGH); 
       digitalWrite(RED_LIGHT, LOW);
      }
      
      //move into the next state
      state = ENGINE_LEAVING_SECTION;
    }
  }
  else if (state == ENGINE_LEAVING_SECTION) {
    //last time we checked both sensors had triggered but both
    //had yet to reset back to normal

    if (analogRead(SENSOR_1) > sensor1 && analogRead(SENSOR_2) > sensor2) {
      //both sensors are now clear so...
      
      //move back to the first state ready for next time
      state = TRACK_SECTION_CLEAR; 
      
      #if (DEBUG)
        Serial.println("Train is clear of measured distance");
      #endif
    }
  }
}

Now that might look like a lot of code, but if you remove the comments there isn't really that much going on. Everything before setup() just creates some constants and variables ready for the speed calculations. The setup() function calibrates the two LDRs: we watch the sensors for five seconds, record the lowest reading we see, and set the threshold for triggering at half that value. The main loop() function then simply implements the state machine we outlined above using a set of if...else statements.
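As a quick sanity check of the maths, using the constants from the sketch: at 1:76 the 74mm of track represents 74 × 76 = 5624mm, i.e. roughly 0.0056 of a scale kilometre. If a locomotive takes 1000ms to cover the gap, its scale speed is 0.0056 × (3600000/1000) ≈ 20.2kph, which is about 12.6mph -- comfortably under the 15mph limit.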

So the main question is: does it work? Testing by simply using my hands to cover the sensors suggested that everything worked well. The last thing to do was to actually time a locomotive. As you can see from this photo it was easy to attach to the track for testing, and I can report that it works really well.

There are many ways in which the code and hardware could be improved. Configuring the distance, speed limit and scale without re-compiling would be useful, as would using a small LCD or multi-segment LED to display the speed, but for now, at least, I'll leave those largely as exercises for the interested reader (though see the sketch below for one possible starting point).
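For example, one (untested) way of making the speed limit configurable would be to accept a new value over the serial connection. A minimal sketch of the idea, assuming speedLimit loses its const and this check is folded into the main program, might look like this:

//a sketch of reading a new speed limit over the serial connection;
//this hasn't been wired into the full program above
float speedLimit = 15;

void setup(void) {
  Serial.begin(9600);
}

void loop(void) {
  if (Serial.available() > 0) {
    //parseFloat reads the next valid float from the serial buffer
    float limit = Serial.parseFloat();
    if (limit > 0) {
      speedLimit = limit;
      Serial.print("New speed limit: ");
      Serial.println(speedLimit);
    }
  }
}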

Pixel Level Precision

I've heard a number of people claim recently that Blogger doesn't give you much control over the size of images you can add to your posts. Essentially it gives you small, medium, large, x-large and original size. In theory this gives you full control, as you could resize your images outside of Blogger and then display them at original size. The problem with this, of course, is that you might not know what size would work best until you have uploaded your image. The solution is to switch to the HTML view where, if you know what you are doing, you can have full control over the size of your image right down to the pixel.

So for those of you who would like more control but have never messed around with HTML before I'll walk you through the steps using an image about dead pixels.

I chose to upload the image to be displayed at a small size and to the left. Blogger generated the following HTML to achieve this (note Blogger puts it all on one line but I've separated it out to make it easier to read and refer to):

<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgznpxj-ACnxUWDExnR4EzI4kGQ-JuD2TzBjYMWega3VV51nQZPVQ50jILnN727oC58SDzIIdlTPclNODNu8oLG_OP1V2wM0YJ1FUaxF6JAW9wp3iDcEXgl4eEZcX8oVI780PbsW1yqCec/s1600/dead-pixels.png" imageanchor="1" style="clear:left; float:left;margin-right:1em; margin-bottom:1em">
<img border="0" height="150" width="200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgznpxj-ACnxUWDExnR4EzI4kGQ-JuD2TzBjYMWega3VV51nQZPVQ50jILnN727oC58SDzIIdlTPclNODNu8oLG_OP1V2wM0YJ1FUaxF6JAW9wp3iDcEXgl4eEZcX8oVI780PbsW1yqCec/s200/dead-pixels.png" />
</a>
</div>

Essentially Blogger has put the image (line 3) inside a link (starting on line 2 and ending on line 4), so that when you click on the image you get a larger version, which in turn is placed inside its own section (a div tag starting on line 1 and ending on line 5). Fortunately we can ignore everything but the image on line 3.

If you look at line 3 you will see that the width and height of the image are explicitly set to 200 pixels and 150 pixels respectively. Now if you change one of these values you will obviously need to change the other one to ensure that your image maintains the same ratio of width/height. Fortunately you don't actually need to do any maths! I'm not sure why Blogger explicitly sets both dimensions of an image because by default all browsers maintain the aspect ratio of an image when only the width or the height is set. So depending how you want to specify the size you need to delete one of the values and change the other. I usually find it easiest to delete the height and alter the width.

So let's remove the height and change the width to 400 pixels to give us:

<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgznpxj-ACnxUWDExnR4EzI4kGQ-JuD2TzBjYMWega3VV51nQZPVQ50jILnN727oC58SDzIIdlTPclNODNu8oLG_OP1V2wM0YJ1FUaxF6JAW9wp3iDcEXgl4eEZcX8oVI780PbsW1yqCec/s1600/dead-pixels.png" imageanchor="1" style="clear:left; float:left;margin-right:1em; margin-bottom:1em">
<img border="0" width="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgznpxj-ACnxUWDExnR4EzI4kGQ-JuD2TzBjYMWega3VV51nQZPVQ50jILnN727oC58SDzIIdlTPclNODNu8oLG_OP1V2wM0YJ1FUaxF6JAW9wp3iDcEXgl4eEZcX8oVI780PbsW1yqCec/s200/dead-pixels.png" />
</a>
</div>

As you can see the image is now larger and still has the right aspect ratio. Success? Not quite yet. What you might not be able to see overly well in this example is that the new larger image isn't actually as sharp as it should be.

To try and make sure that your blog loads as quickly as possible Blogger tries to keep down the amount of data it needs to transfer to your computer. One of the ways it does this is by resizing the images before it sends them to you. If you look again at line 3 you will notice that the end of the image URL looks like /s200/dead-pixels.png. The 200 in the URL tells Blogger which size version of the image to send, in this case 200 pixels along the longest edge. So you now have a 200 pixel wide image being scaled up to 400 pixels wide by the browser which is why it doesn't look as sharp as it should do. Fortunately it is easy to change the URL to give us an appropriate size image.

I'm not sure of the full range of valid sizes, but I do know that you can specify any size up to 1600 that is a multiple of 200. So for this example we can specify that we want a 400 pixel wide image:

<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgznpxj-ACnxUWDExnR4EzI4kGQ-JuD2TzBjYMWega3VV51nQZPVQ50jILnN727oC58SDzIIdlTPclNODNu8oLG_OP1V2wM0YJ1FUaxF6JAW9wp3iDcEXgl4eEZcX8oVI780PbsW1yqCec/s1600/dead-pixels.png" imageanchor="1" style="clear:left; float:left;margin-right:1em; margin-bottom:1em">
<img border="0" width="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgznpxj-ACnxUWDExnR4EzI4kGQ-JuD2TzBjYMWega3VV51nQZPVQ50jILnN727oC58SDzIIdlTPclNODNu8oLG_OP1V2wM0YJ1FUaxF6JAW9wp3iDcEXgl4eEZcX8oVI780PbsW1yqCec/s400/dead-pixels.png" />
</a>
</div>

As you can see changing the URL to ask for a more appropriately sized image gives us a much sharper photo as the browser doesn't need to stretch it to the requested size.

The trick of course is to choose an image size that balances the trade-off between sharpness and download speed. I always go for a URL pointing to the smallest image that is the same size or larger than I want to display. This ensures that the browser never has to stretch the image, and shrinking it down doesn't have quite such a dramatic effect on sharpness.
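For example, to display the image from this post at 350 pixels wide I would request the 400 pixel version and let the browser shrink it slightly (the src URL is abbreviated here; only the /s400/ part and the width change):

<img border="0" width="350" src="https://blogger.googleusercontent.com/img/.../s400/dead-pixels.png" />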

Hopefully all that made sense and will allow those of you who aren't too comfortable editing HTML to have better control over the images you use in your posts. If there was anything that wasn't clear or didn't make sense leave me a comment and I'll try and help.

Fritzing

Over the last few days I've been playing around with my Arduino and have finally got as far as interfacing it with other electronic components rather than just writing software to run on it. Whilst it is easy to store the software side of multiple Arduino projects safely, I wasn't entirely sure of the best way of recording the hardware setup -- basically I wanted to be able to take a snapshot of a project so that at some later date I could recreate it, either because I messed something up or because I'm playing with multiple projects at the same time and reusing components. It turns out that the solution is obvious: Fritzing.

If you look at any of the Arduino tutorial pages (e.g. the basic blink demo) you can see that it contains both a visual representation of the project and a traditional circuit diagram. I'd always assumed that these were produced separately but I was wrong.

These diagrams are all drawn using Fritzing. Fritzing supports three views of a project: breadboard, schematic and PCB. The breadboard view shows a drawing of your project that is almost a photograph of the real thing. The schematic view gives a traditional circuit diagram, and the PCB view allows you to convert a prototype into a PCB that could be manufactured. As far as possible, changes in one view are reflected in the other views.

This means it is easy to document a project simply by recreating it in the breadboard view and allowing Fritzing to generate the circuit diagram for you.

On top of all that the people behind Fritzing have also produced Fritzing Fab: a cheap way of printing custom PCBs. So you can easily prototype an idea with an Arduino, record the prototype in Fritzing, and then generate a permanent version by printing and populating the PCB. I haven't got as far as needing to print a PCB yet, but given how integrated the steps are it could turn out to be really useful. If nothing else expect any future Arduino related posts to include Fritzing generated images.