Due Monday Feb 24, 11pm (extended from Wednesday Feb 19)
Worth: 14% of your final grade
In the first part of this lab you’ll assemble the proximity sensors on your robot: two forward-facing bump switches and two side-facing infrared distance sensors.
You will also add the PandaBoard processor, which runs Ubuntu 12.04. The low-level processor (LLP, the Orangutan SVP board, or just the "org") will now communicate with the PandaBoard (the HLP, or just the "panda") over a short USB cable on the robot. The PandaBoard takes the place of the laptop or other external computer you have been using to communicate with the org. You will now connect to the HLP either via a USB-to-serial cable or over the network (wifi or Ethernet).
In the second part of the lab you’ll write code for the HLP to implement a form of local navigation based on the Bug2 algorithm.
In the third part of the lab you’ll write code for the HLP to implement a global navigation algorithm given a map. Your solutions for each part can make use of a library of supporting Java code.
The HLP should boot once it receives power and as long as the SD card, which acts like its disk drive, is inserted (please don’t ever remove it after it is first installed). You should see the two LEDs near the SD card flash; you will likely become familiar with the pattern.
Important: once the HLP has booted, it is important to cleanly shut it down. This ensures that the filesystem on the SD card is left in a clean state. The HLP gets power from the LLP, so anytime you shut down the LLP, the HLP will also lose power. (It is ok to reset the LLP, as this does not affect its power supply circuitry.) The correct way to shut down the HLP is to log in to it (see below on different options for that) and run the command
> sudo shutdown -h now
Wait for the LED indicator lights near the SD card to stop flashing. It is then safe to remove power. Or, if you want to reboot the HLP:
> sudo shutdown -r now
We have set up Ubuntu 12.04 on the HLP in a configuration similar to the ohmmkeys (VMWare is not involved here, of course). In place of the long black USB cables you were using to connect from the ohmmkey VM (or your own machine) to the LLP, we have now installed a short grey USB cable that connects the LLP to the HLP. The LLP communication port still appears at /dev/ttyACM1 on the HLP.
It is possible to use the HLP with a standard USB keyboard, mouse, and HDMI or DVI-D (but not VGA) display. The first two can be plugged in to any free USB port on the HLP. The display must connect to the HDMI connector labeled HDMI-1080p (the one further from the front of the robot). You may use either HDMI to HDMI or HDMI to DVI-D cables (VGA is unfortunately not supported without extra conversion electronics), but beware that the HLP may have configuration issues with some monitors. When using a monitor, it is best to have it connected at boot. We have had good results with 1280x1024 LCD panels.
It is more common to use the HLP in a “headless” mode where we only connect to it over the network—it has both wifi and Ethernet connections—and/or via a serial terminal connected to the RS-232 serial port on the DB-9 connector at the rear of the HLP. Because most computers no longer have true RS-232 serial hardware, we provide you with a USB to serial adapter cable. You again interact with this using a terminal program such as kermit or minicom, but be aware that the port name will not be ACM1 here as it was for communicating directly with the LLP. The port name will depend on your system: typically on Linux it will be /dev/ttyUSB0, and on OS X it will be /dev/tty.usbserial. The adapter we provide is based on the Prolific PL2303 chipset, which should work without extra drivers at least on modern Linux installations. For other OSes you may need to manually install drivers.
You may be familiar with GUI tools, such as NetworkManager in Ubuntu, for identifying and connecting to wireless networks. Normally you interact with NetworkManager graphically via an icon in the task tray, but it is also possible to manipulate it from the command line; it runs as a daemon (background service) even when headless.
Please read carefully the information here about connecting the HLP to wifi networks from the command line. Also follow the instructions given in lab on the particulars of using the wifi on our networks.
The USB-wifi dongle we have provided you is not intended to be attached to the HLP, which has its own onboard wifi hardware. Instead, it is to be used with your ohmmkey or other virtual (or physical) machines used for code development and debugging. The dongle enables your development machine (or VM) to connect to the same wifi router as the HLP so that it can see the HLP’s local IP address when the router is using NAT (network address translation) to form a private subnet. It is probably not needed if your development machine is a laptop (whether or not you are using a VM)—in that case just connect your laptop directly to the same network to which your robot is connected.
There are three main ways to “log in” to the HLP:
Determine the IP address and the wifi network to which the HLP is connected. As described here, you can generally do this by pushing the user pushbutton on the HLP for about 1 second. The network name and the IP address will then appear on the LCD in a few seconds. Then, from another computer on the same network (or which can at least “see” the IP address of the HLP), ssh to it:
> ssh USER@IP
Here USER is your username on the HLP and IP is the IP address of the HLP. You can omit the USER@ part if your username on the HLP is the same as your username on the machine from which you are sshing. You can also try
> ssh USER@ohmmN # N is your group number, USER@ is optional
or
> ssh USER@ohmmN.local # N is your group number, USER@ is optional
which, in some cases depending on the configuration of the wifi network, may allow you to use the name of your HLP instead of its IP address.
Connect the USB-serial adapter and then run
> minicom usb0
on your development machine, assuming you are running Linux and you have installed the provided minirc.usb0 in /etc/minicom or as ~/.minirc.usb0.
Attach a keyboard, mouse, and monitor, and log in graphically. You may need to reboot the HLP with the extra hardware connected.
Log in to the HLP with the “ohmm” account and add users for each of your group members just as you did for your ohmmkey in Lab 0 (though here the command line is probably going to be the way to go).
First, make SVN checkouts in your account on the HLP as described in lab 0.
Also, update any svn checkouts you already have on your ohmmkey or your personal development machines with a command like this:
> cd ~/robotics; svn up ohmm-sw; svn up ohmm-sw-site; svn up g0
Then follow similar instructions to copy the lab 2 framework as for lab 0, but change l0 to l2 in the instructions (do not delete your old l0 and l1 directories).
Please do not edit the provided files, in part because we will likely be posting updates to them. Any edits you make in them will be overwritten if/when you re-copy them to get our updates.
We have now included our solution for the LLP drive module in robotics/ohmm-sw-site/llp/monitor/drive.c and robotics/ohmm-sw-site/llp/monitor/ohmm/drive.h. To use this code, rebuild the monitor and flash it to your LLP. With the LLP connected and powered up (this should be the default if you are running these commands on the HLP), run these commands:
> cd ~/robotics/ohmm-sw/llp/monitor
> make clean; make; make program
If you would like to continue using your lab 1 solution code for the drive module instead of our solution, instead run these commands:
> cd ~/robotics/ohmm-sw/llp/monitor
> make clean; make lib IGNORE_DRIVE_MODULE=1
> cd ~/robotics/gN/l1; make; make program
However, you should be aware that the Java library we provide to talk to the monitor assumes that the drive module commands are implemented exactly as specified. Also, our solution for the drive module includes many additional commands beyond those you were required to write in lab 1. Solutions to future labs may assume that all the drive module commands from our solution are available.
We have also now included a Java library to run on the HLP and communicate with the monitor program running on the LLP. This will let you write Java code for the HLP which, for example, can make a function call like ohmm.motSetVelCmd(0.5f, 0.5f) instead of manually typing msv 0.5 0.5 into minicom. It also provides an optional scheme layer where you can make the scheme function call (msv 0.5f 0.5f) to achieve the same result. Almost all of the monitor commands have corresponding Java and scheme function calls.
You first need to build the jar file for this Java library like this:
> cd ~/robotics/ohmm-sw/hlp/ohmm
> make clean; make; make project-javadoc; make jar
This will also generate documentation for the OHMM Java library in robotics/ohmm-sw/hlp/ohmm/javadoc-OHMM (the top level file is index.html); or you can view it online here.
Next you can compile the example code we provided to get you started with the lab:
> cd ~/robotics/gN/l2
> make
To run the example code, we recommend using the run-class
script we provide:
> cd ~/robotics/gN/l2
> ./run-class OHMMShellDemo -r /dev/ttyACM1
run-class is a shell script that uses the makefile to help build a Java command line, including classpath and other flags. (It assumes that there is a suitable makefile in the current directory.) The first argument, here OHMMShellDemo, is the Java class containing the main() function, with or without a package prefix (here we could have also used l2.OHMMShellDemo); when the package is omitted it is inferred from the project directory structure. The remaining arguments, here -r /dev/ttyACM1, are passed as command line arguments to main().
The jarfile generated in ~/robotics/ohmm-sw/hlp/ohmm will have a name like OHMM-RNNN_YYYY-MM-DD.jar, where NNN is the SVN revision number of the OHMM library you are using and YYYY-MM-DD is the current date. A symbolic link OHMM-newest.jar -> OHMM-RNNN_YYYY-MM-DD.jar will also be made. The jarfile is very inclusive: it packages up the external jars the library depends on (see EXT_JARS in makefile.project for the current full list). This means that if you want to use your own machine for Java development, all you should need to do is transfer the OHMM jar to that machine and include it in the classpath when you run the java compiler. You can even unpack the jar so you can read the sourcecode and browse the javadoc:
# assuming you have the OHMM jar in the current dir as OHMM-newest.jar
> mkdir OHMM-jar; cd OHMM-jar; jar xvf ../OHMM-newest.jar
# now subdir ohmm contains the OHMM java sourcecode
# and subdir javadoc-OHMM contains the OHMM javadoc
If you actually want to run your code on your own machine and test it (e.g. with the LLP connected using the long black USB cable), you will also need to manually install the native libraries (.so on Linux, .dylib on OS X, .dll on Windows) that are required by any of the dependencies. In particular, RXTX requires a native library for serial port communication, and javacv (which is not actually needed for this lab) would need access to the native OpenCV libraries. Where (and how) these should be installed is system dependent.
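For example, on Linux the RXTX native library is librxtxSerial.so; one common approach is to point the JVM at the directory containing it via java.library.path, roughly like this (the path shown is only an illustration):
> java -Djava.library.path=/path/to/native/libs -cp OHMM-newest.jar ohmm.OHMMShell -r /dev/ttyACM1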
The demo program uses JScheme to implement a scheme read-eval-print-loop (REPL) command line. You can launch it like this:
> cd ~/robotics/gN/l2
> ./run-class OHMMShellDemo -r /dev/ttyACM1
or like this:
> cd ~/robotics/ohmm-sw/hlp
> ./run-class OHMMShell -r /dev/ttyACM1
or like this:
> java -cp path/to/OHMM-newest.jar ohmm.OHMMShell -r /dev/ttyACM1
First try a command like
> (e 14b)
$1 = 14B
which just asks the LLP monitor to echo back the given byte value 14b. Or run
> (df 1000.0f)
to add a 1000mm forward drive command to the queue and start it (the robot will move!). The b and f suffixes force scheme to interpret numeric literals as byte and float datatypes; they are necessary so that JScheme can correctly match the scheme call to a Java function. If the suffixes were omitted, the literals would have been interpreted as int and double (due to the .0), respectively, which will not be automatically converted to the narrower types byte and float.
Examine all the provided *.scm and *.java sourcecode in robotics/gN/l2, robotics/ohmm-sw/hlp/ohmm, and robotics/ohmm-sw-site/hlp/ohmm so you understand what is available and how to use it.
You can use the demo program as a template for your own code. For example, you could copy OHMMShellDemo.java to L2.java like this:
> cd ~/robotics/gN/l2
> cp OHMMShellDemo.java L2.java
> svn add L2.java
Then open L2.java in your editor and follow the instructions in it to customize it. Finally, to compile and run:
> cd ~/robotics/gN/l2
> make
> ./run-class L2 -r /dev/ttyACM1 # or whatever command line arguments you need
Bring up an OHMMShell and run the following commands to configure the bump switches as digital sensors and the IRs as analog sensors (you may omit the comments):
> (scd io-a0 #t #f) ; sensor config digital - left bump switch on IO_A0
> (scd io-a1 #t #f) ; sensor config digital - right bump switch on IO_A1
> (scair ch-2 1) ; sensor config analog IR - front IR on A2
> (scair ch-3 1) ; sensor config analog IR - rear IR on A3
scd stands for “sensor config digital” and scair stands for “sensor config analog ir”; they correspond to monitor commands in the sense module. io-a0, ch-2, etc. are convenience scheme constants which identify particular input/output ports and analog-to-digital conversion channels on the LLP.
It is always essential to configure the sensor inputs like this before you use them. If you are writing Java code, you could write ohmm.senseConfigDigital(DigitalPin.IO_A0, true, false) instead of (scd io-a0 #t #f) and ohmm.senseConfigAnalogIR(AnalogChannel.CH_2, 1) instead of (scair ch-2 1).
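For reference, a Java version of the sensor setup and polling might look like the following sketch, written as helper methods you could add to your own class. The two senseConfig calls are exactly the ones just shown; the senseReadDigital()/senseReadAnalog() names are assumed Java analogues of the srd/sra monitor commands used below, so check the javadoc for the actual method names:

// Sketch: configure and poll the proximity sensors from Java.
// senseConfigDigital()/senseConfigAnalogIR() are from the text above;
// senseReadDigital()/senseReadAnalog() are ASSUMED names -- verify in the javadoc.
void setupSensors(OHMM ohmm) {
  ohmm.senseConfigDigital(DigitalPin.IO_A0, true, false); // left bump switch
  ohmm.senseConfigDigital(DigitalPin.IO_A1, true, false); // right bump switch
  ohmm.senseConfigAnalogIR(AnalogChannel.CH_2, 1);        // front IR
  ohmm.senseConfigAnalogIR(AnalogChannel.CH_3, 1);        // rear IR
}

void pollSensors(OHMM ohmm) {
  boolean leftBump  = ohmm.senseReadDigital(DigitalPin.IO_A0); // assumed API
  boolean rightBump = ohmm.senseReadDigital(DigitalPin.IO_A1); // assumed API
  float frontIR = ohmm.senseReadAnalog(AnalogChannel.CH_2);    // assumed API, mm
  float rearIR  = ohmm.senseReadAnalog(AnalogChannel.CH_3);    // assumed API, mm
  System.out.printf("bump L=%b R=%b  IR front=%.0f rear=%.0f mm%n",
                    leftBump, rightBump, frontIR, rearIR);
}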
Try out the bump switches:
> (srd io-a0) (srd io-a1) ; sensor read digital
They should read #t when triggered and #f otherwise. Make sure that the left sensor corresponds to the first reading and the right sensor to the second.
Try out the IRs. Set up a reasonable object at a known distance between 8 and 80cm, then run
> (sra ch-2) (sra ch-3) ; sensor read analog
They should read out the distance in millimeters, plus or minus a few due to noise. Make sure that the front sensor corresponds to the first reading and the rear sensor to the second.
Now you will implement a simplified version of the Bug2 local navigation algorithm covered in L5. The simplifying assumptions are:
We strongly recommend you write all code for this lab in Java that runs on the HLP. If you prefer to use other languages, we will allow that, but you will need to write your own equivalent of the Java interface code we provide. There should be no need to write more AVR C code for the LLP for this lab.
Develop a graphical debugging system that shows a bird's-eye view of the robot operating in a workspace in global frame coordinates, covering at least the minimum world area specified for this lab. This display must bring up a graphics window that shows the axes of the world frame and the current robot pose, updated at a rate of at least 1Hz. Arrange the graphics so that world frame x points to the right, world frame y points up, and y = 0 is vertically centered in the window (and remember, your graphics must always show at least the minimum world area stated above).
Make sure that this can work over the network, somehow, even when the HLP is headless. We have provided a simple HTTP-based ImageServer and some example code that uses it in robotics/g0/l2/ImageServerDemo.java. You could extend this code to draw the required graphics, which will be sent to a remote client using a standard web browser.
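Independent of how the image then gets to a remote client, the actual drawing can be done with standard Java 2D. The sketch below renders a bird's-eye view into a BufferedImage with world x to the right and world y up; the class name and the world bounds used here are just placeholders, not the required coverage area:

import java.awt.*;
import java.awt.image.BufferedImage;

// Sketch: render the world axes and the robot pose into a BufferedImage
// (x right, y up).  The bounds below are placeholders, not the required
// minimum world area; the pose (x, y, theta) is in world frame meters/radians.
public class DebugView {
  static final double XMIN = -1, XMAX = 5, YMIN = -2, YMAX = 2; // placeholders (m)

  public static BufferedImage render(double x, double y, double theta) {
    int w = 600, h = (int) Math.round(w*(YMAX - YMIN)/(XMAX - XMIN));
    double s = w/(XMAX - XMIN); // pixels per meter
    BufferedImage img = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
    Graphics2D g = img.createGraphics();
    g.setColor(Color.WHITE); g.fillRect(0, 0, w, h);
    // world (wx, wy) -> pixel: px = (wx - XMIN)*s, py = (YMAX - wy)*s (flip y)
    g.setColor(Color.LIGHT_GRAY);
    g.drawLine(0, (int) (YMAX*s), w, (int) (YMAX*s));   // world x axis (y = 0)
    g.drawLine((int) (-XMIN*s), 0, (int) (-XMIN*s), h); // world y axis (x = 0)
    int px = (int) ((x - XMIN)*s), py = (int) ((YMAX - y)*s);
    g.setColor(Color.RED);
    g.fillOval(px - 5, py - 5, 10, 10);                 // robot position
    g.drawLine(px, py, px + (int) (20*Math.cos(theta)),
                       py - (int) (20*Math.sin(theta))); // heading (screen y flipped)
    g.dispose();
    return img;
  }
}

You could then hand an image like this to the provided ImageServer (see ImageServerDemo.java for how).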
Another option would be to use remote X Window display. Though this can use significant network bandwidth, it requires no special code. Just open a regular window and draw your graphics.
You could also design your own client/server protocol. If you do run a server, whether it speaks the HTTP protocol or some protocol you design, be aware that there is a firewall running on the HLP. You can get info on it here, including how to disable it or poke a hole in it so that your server can take incoming connections. By default the port 8080 should be available for your use.
Remember that the network can be unreliable. Your navigation code (for all parts of the assignment) should continue to work even if the graphical display fails because of network issues (and/or you can have an option to run your code with no graphical display, just text debugging).
Another consideration when designing your debug graphics system, if you are using the provided Java OHMM library, is that there can be only one instance of the OHMM object. See the discussion titled “Thread Safety” in the javadoc.
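One simple way to respect that constraint is to keep all OHMM calls in a single (navigation) thread and share only plain data, such as the latest pose, with the display thread; the class below is just an illustration of that pattern, not part of the framework:

// Illustrative only: the navigation thread calls set() after each pose update
// (it alone talks to the OHMM object); the display thread calls get() and
// never touches OHMM at all.
class SharedPose {
  private double x, y, theta;
  synchronized void set(double x, double y, double theta) {
    this.x = x; this.y = y; this.theta = theta;
  }
  synchronized double[] get() {
    return new double[] { x, y, theta };
  }
}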
Write a program for the HLP that implements the Bug2 algorithm subject to the above simplifications, and that uses your graphical debugging system to show its progress. The goal location should not be hardcoded; rather, read the goal x coordinate from the first command line argument, in floating point meters.
If your Java class to solve this part is called Bug2, you should be able to invoke it like this
> ./run-class Bug2 4.3
for a goal at x = 4.3 (meters). You will likely find this more manageable if you break the task into the following parts:
Use the bump switches (with ohmm.senseConfigDigital() on the appropriate pins; see the discussion above in Sensor Testing).
Use the IRs (with ohmm.senseConfigAnalogIR() on the appropriate channels; notice the final two characters in that API name are I and R). Plot their data points in your debug system. Devise a way to estimate the obstacle wall pose from the data points (we suggest you use the line fitting approach covered in L6).
Whether or not you choose to solve the problem as we suggested above, it is a requirement that your debug code show (at least)
It is also a requirement that your program somehow report the following events:
Now you will implement a global navigation algorithm of some type; the visibility graph and free space graph algorithms presented in L7 are reasonable options. You may make the following simplifying assumptions:
Procedure:
Write a program for the HLP that implements your global navigation algorithm by reading in a text map file in the format
xgoal ygoal
xmin0 xmax0 ymin0 ymax0
xmin1 xmax1 ymin1 ymax1
...
xminN xmaxN yminN ymaxN
where each token is a floating point number in ASCII decimal format (e.g. as accepted by Float.parseFloat() in Java). The first line gives the goal location in meters in world frame. Each subsequent line defines an axis-aligned rectangle in world frame meters (the rectangle sides are always parallel or perpendicular to the coordinate frame axes, never tilted). The first rectangle is the arena boundary, and the rest are obstacles. There may be any number of obstacles, including zero. The obstacles may intersect each other and the arena boundary.
Make sure your map file parser can handle arbitrary whitespace (space and tab characters) between values, extra whitespace at the beginning or end of a line, blank lines, values that include leading + and -, and values with and without decimal points. And remember, the values are in meters.
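As a sketch of just the parsing (the MapData class and its field names here are ours, not part of the provided framework), java.util.Scanner plus Float.parseFloat() handles all of the whitespace and sign variations described above:

import java.io.Reader;
import java.util.ArrayList;
import java.util.List;
import java.util.Scanner;

// Illustrative map parser; MapData and its fields are not part of the framework.
// Scanner's default delimiter is any whitespace, so extra spaces, tabs, and
// blank lines are handled; Float.parseFloat() accepts leading + and - and
// values with or without decimal points.
class MapData {
  float goalX, goalY;                             // goal, world frame meters
  List<float[]> rects = new ArrayList<float[]>(); // each {xmin, xmax, ymin, ymax}; index 0 is the arena boundary

  static MapData parse(Reader in) {
    Scanner s = new Scanner(in);
    MapData m = new MapData();
    m.goalX = Float.parseFloat(s.next());
    m.goalY = Float.parseFloat(s.next());
    while (s.hasNext()) {
      m.rects.add(new float[] {
        Float.parseFloat(s.next()), Float.parseFloat(s.next()),
        Float.parseFloat(s.next()), Float.parseFloat(s.next()) });
    }
    return m;
  }
}

You could call parse() with a new FileReader(...) if you take the map file name on the command line, or with a new InputStreamReader(System.in) if you read the map from standard input, matching the two invocation styles shown below.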
If your Java class to solve this part is called GlobalNav and you have a map in the above format stored in a file called themap, you should be able to invoke it either like this
> ./run-class GlobalNav themap
if you accept the name of the map file on the command line; or like this, if you read the map from the standard input stream
> ./run-class GlobalNav < themap
We will leave most details up to you. However, it is required that you have similar graphical debug code for this part as for local navigation, and that here it must show (at least)
It is also a requirement that your program somehow indicate when the goal has been reached.
You will be asked to demonstrate your code for the course staff in lab on the due date for this assignment (listed at the top of this page); 30% of your grade for the lab will be based on the observed behavior. We mainly want to see that your code works and is as bug-free as possible.
The remaining 70% of your grade will be based on your code, which you will hand in following the general handin instructions by the due date and time listed at the top of this page. We will consider the code completeness, lack of bugs, architecture and organization, documentation, syntactic style, and efficiency, in that order of priority. You must also clearly document, both in your README and in code comments, the contributions of each group member.