Home
shaladdle edited this page Apr 21, 2013 · 4 revisions
Not sure yet what all should go here. Seems like a good place to at least put progress reports or to-do lists.
- Get a simulator working
- Motion model
- Measurement model
- Visualization (Tim says vypy or something should be easy to do)
- Decide on a method for data fusion
- Manuela suggested DDF-SAM, which is described in the paper SAM/Cunningham10iros.pdf. Also look at SAM/Dellaert06ijrr_SAM_.pdf for a more thorough treatment of how to do SAM as least squares. Lots of linear algebra here. I thought section 2 was good for describing how to formulate SLAM as a graphical problem; section 3 gets dense but is good for the least-squares formulation.
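For reference, the least-squares formulation from that line of work boils down to roughly the following (notation loosely follows the Dellaert paper; `f` is the motion model, `h` the measurement model, and the Mahalanobis norms use the process and measurement noise covariances):

```latex
% SLAM as nonlinear least squares over robot poses x_i and landmarks l_j,
% given controls u_i and landmark measurements z_k:
\theta^{*} = \operatorname*{argmin}_{\{x_i\},\{l_j\}}
    \sum_{i} \left\| f(x_{i-1}, u_i) - x_i \right\|_{\Lambda_i}^{2}
  + \sum_{k} \left\| h(x_{i_k}, l_{j_k}) - z_k \right\|_{\Sigma_k}^{2}
```

Linearizing `f` and `h` around a current estimate turns this into the big sparse linear least-squares problem that section 3 of the paper works through.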
- Tim made progress on ROS and got turtlesim running, but is still figuring out how to interact with the turtlesim code. Things we would like to be able to do:
- using turtlesim as a display for our simulator
- being able to display landmarks
- being able to display error ellipses (for both landmarks and robot pose)
- Nathan and I worked on the Python code some, trying to implement a Kalman filter. We are still figuring out the model. I think F (the state transition matrix) is okay for now; H (the measurement matrix) needs a little more thought.
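As a minimal sketch of where the Kalman filter could start, here are the standard predict/update equations for a 2x1 position-only state, with F = I (since u is already a position delta) and a placeholder H = I (i.e. pretending we measure position directly, which is an assumption until we settle the measurement model):

```python
import numpy as np

def kf_predict(x, P, u, F, Q):
    """Predict step: x' = F x + u, P' = F P F^T + Q."""
    x = F @ x + u
    P = F @ P @ F.T + Q
    return x, P

def kf_update(x, P, z, H, R):
    """Update step: fold in measurement z."""
    y = z - H @ x                       # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ y
    P = (np.eye(x.shape[0]) - K @ H) @ P
    return x, P

# Placeholder model: 2x1 state [x, y], F = H = I for now.
F = np.eye(2)
H = np.eye(2)
Q = 0.01 * np.eye(2)   # process noise (made-up value)
R = 0.1 * np.eye(2)    # measurement noise (made-up value)

x = np.zeros((2, 1))
P = np.eye(2)
x, P = kf_predict(x, P, np.array([[1.0], [0.5]]), F, Q)
x, P = kf_update(x, P, np.array([[1.1], [0.4]]), H, R)
```

With these placeholder matrices the update just pulls the predicted position partway toward the measurement, weighted by the relative sizes of P and R; the real work is deciding what H should be once `sense()` returns landmark observations instead of direct position.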
Still remaining (immediate future)
- Write simulator functions for
- do_motors
- sense
- read_odometry
- Research non-Kalman filter approaches (particle filters, SAM)
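A sketch of how the three simulator functions above could hang together, assuming a 2x1 position state and simple additive controls (the class name, noise model, and sensor-range cutoff are all assumptions, not settled design):

```python
import numpy as np

class Simulator:
    """Minimal robot simulator: state is a 2x1 position vector."""

    def __init__(self, landmarks, sensor_range=5.0, odom_noise=0.05):
        self.pos = np.zeros((2, 1))
        self.landmarks = landmarks          # list of 2x1 landmark positions
        self.sensor_range = sensor_range
        self.odom_noise = odom_noise
        self._last_u = np.zeros((2, 1))

    def do_motors(self, u):
        """Apply control u (a 2x1 position delta) to the true position."""
        self.pos = self.pos + u
        self._last_u = u

    def sense(self):
        """Return landmarks within sensor range, as offsets from the robot."""
        readings = []
        for lm in self.landmarks:
            offset = lm - self.pos
            if np.linalg.norm(offset) <= self.sensor_range:
                readings.append(offset)
        return readings

    def read_odometry(self):
        """Return the last commanded motion corrupted by Gaussian noise."""
        noise = self.odom_noise * np.random.randn(2, 1)
        return self._last_u + noise
```

Keeping the true state private to the simulator and only exposing noisy `sense()`/`read_odometry()` readings means the filter code can be tested against ground truth later.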
Got the simulator a little farther along. do_motors takes a NumPy vector (2x1 matrix) and simply adds it to the robot's current position, so for now it assumes u is the change in x, y position. Angles are not handled yet; really we want change in pose as the next step. Other things:
- Got a graphical library, graphics.py, and made it display the robot as a rectangle
- Coordinate frames are still not really handled. We want to take vectors in the math Cartesian frame (x increases to the right, y increases upwards) and translate them to the graphics frame (x increases to the right, y increases downwards). Right now I do this for the motion vector u by just negating its y component. I haven't thought through whether that handles things properly; it might if we only take position into account and not angle.
- Currently there are no landmarks; we need to randomly generate landmarks on simulator initialization or load them from a file. Loading from a file would be useful for running the same test cases repeatedly.
- Once landmarks are implemented in the simulator, we need to figure out what to do about sensor readings. Probably the most sensible thing to do is to take landmarks that are in some visual range of the robot and use those to create the sensor reading that we pass back from sense().
- Kalman filter still does nothing. I think the next step should be to get the Kalman filter working with this really simple simulator/system model.
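On the coordinate-frame question: the y-flip is different for absolute positions and for displacement vectors, which may be why negating u's y component feels unfinished. A quick sketch (the function names and the idea of flipping about the window height are assumptions based on typical pixel coordinates, not something settled in our code):

```python
import numpy as np

def math_to_graphics(p, window_height):
    """Convert an absolute position (2x1) from the math frame (y up)
    to the graphics frame (y down): flip y about the window height."""
    return np.array([[p[0, 0]], [window_height - p[1, 0]]])

def math_to_graphics_delta(u):
    """Convert a displacement like the motion vector u:
    only the sign of the y component changes."""
    return np.array([[u[0, 0]], [-u[1, 0]]])
```

So negating y is correct for u as long as u stays a pure displacement; absolute positions (robot pose, landmarks) would also need the window-height offset before drawing.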