I designed and built an Omnidirectional Modular Robot (OMR) from scratch within a few weeks.

OMR is a four-wheel omniwheel robot with a holonomic drive. This means the robot can move in any direction, nearly instantaneously. Each wheel is positioned 90 degrees from its neighbors. The picture below illustrates the layout of the drive system.

OMR uses an I2C master-slave setup. The slaves are two microcontrollers that drive the two motor drivers with PWM, which in turn control the four motors. Each motor has an absolute encoder managed by the microcontrollers. The benefit of this setup is that it is very modular and only requires the SDA and SCL signals for communication. Because it is a master-slave system, the master does not need to wait for the slaves to finish working, so there is very little delay and everything can run concurrently. The master can be a Raspberry Pi or any other microprocessor, so it can handle image processing or even other sensors. OMR also has a 2.4 GHz transceiver for wireless communication. The diagram below depicts the layout of the control system.
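As a sketch of how the master could frame a command for one motor-controller slave, the snippet below packs two wheel speeds into a fixed byte layout. The layout (two signed 16-bit values, little-endian) and the function names are assumptions for illustration, not OMR's actual protocol; on a Raspberry Pi master the payload would go out over the I2C bus (e.g. with an SMBus block write).

```python
import struct

# Hypothetical command layout (illustrative, not the actual OMR protocol):
# one signed 16-bit speed per wheel, little-endian, two wheels per slave.
def pack_speed_command(left_speed, right_speed):
    """Master side: pack two wheel speeds into a 4-byte I2C payload."""
    return struct.pack('<hh', left_speed, right_speed)

def unpack_speed_command(payload):
    """Slave side: decode the 4-byte payload back into wheel speeds."""
    return struct.unpack('<hh', payload)

# Round-trip the payload in software (on hardware this would cross the bus).
msg = pack_speed_command(500, -250)
print(unpack_speed_command(msg))  # (500, -250)
```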

OMR is going to have a low-cost LIDAR, a compass, and possibly an IMU. This allows the robot to have a clear understanding of where it is in the environment and possibly perform SLAM to map the environment. By combining an IMU, encoders, and SLAM, the robot can move quickly and accurately and adjust the speed of each wheel correctly even if the contact with the floor is not even across all wheels. Below is a diagram of a robot localizing itself in an environment.

OMR has a cheap analog absolute encoder. It uses two grayscale sensors positioned 90 degrees apart over a printed gradient circle to generate sine and cosine signals, from which the absolute position of the wheel is computed. Below illustrates how this works.

The sensors did not work as easily as expected; however, I got quite close and am still working on improving this. The first time the robot runs, each motor spins a few rotations to record the minimum and maximum sensor values, and from these the angle can be calculated. Unfortunately, the real data taken from the sensor, depicted below, was far from ideal.
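The calibration and angle math can be sketched as follows: normalize each sensor reading with the min/max recorded during the spin-up pass, then recover the wheel angle with a two-argument arctangent. The function names and the idea of a 10-bit raw range are assumptions for illustration.

```python
import math

def calibrate(samples):
    """Return the (min, max) raw readings seen during the spin-up pass."""
    return min(samples), max(samples)

def normalize(raw, lo, hi):
    """Map a raw grayscale reading into [-1, 1] using the calibration range."""
    return 2.0 * (raw - lo) / (hi - lo) - 1.0

def wheel_angle(sin_raw, cos_raw, sin_cal, cos_cal):
    """Absolute wheel angle in radians from the two 90-degree-offset sensors."""
    s = normalize(sin_raw, *sin_cal)
    c = normalize(cos_raw, *cos_cal)
    return math.atan2(s, c)

# Example with ideal 10-bit signals: sine at its max, cosine at mid-range,
# which corresponds to an angle of pi/2.
angle = wheel_angle(1023, 511.5, (0, 1023), (0, 1023))
```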

The data was clearly not good, so I first tried to fix it physically. I found a higher-DPI printer and printed a better gradient circle; this was the result.

The physical change improved the signal greatly, but it still was not good enough, so I then worked on the software side. The result was much closer; however, there is still a problem at the minimums.

I used proportional control on the wheels so that the motors can move to a set angle; however, the wheel sometimes jitters back and forth. This can be fixed by adding integral and derivative control. Full PID will allow the motors to move to the right angles and then let me drive the wheels at a set velocity or even acceleration.
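A minimal sketch of what the upgraded loop could look like, assuming the angle error is wrapped to [-pi, pi] so the wheel takes the short way around; the gains here are placeholders, not tuned values.

```python
import math

class PID:
    """Minimal PID controller for the wheel-angle loop (gains illustrative).
    The derivative term damps the back-and-forth jitter a P-only loop shows."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measured, dt):
        # Wrap the angle error into [-pi, pi] so 359° -> 1° is a 2° move.
        error = (setpoint - measured + math.pi) % (2 * math.pi) - math.pi
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```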

I modeled it in SolidWorks, laser cut it from acrylic, and then tapped the holes. Each module has its own laser-cut mount. The mounts all use LEGO spacing so that LEGO pieces can also be mounted on the platform. It is very modular.

I plan to use this for TOBOM and implement SLAM. I got the encoders and I2C communication working and am now working on adding a compass. Below are some images.

My approach was to collect a few leg-length data values through trial and error and then obtain a function using an exponential fit in MATLAB. I collected 8 lengths for 8 desired heights and used MATLAB's exponential fit to obtain the function for any desired height. The function obtained was rest_leg_length = 0.5009*exp(0.04473*height_desired) - 0.5657*exp(-8.716*height_desired). Before trying an exponential fit I tried linear and polynomial fits; however, those equations were not as good. Below illustrates the best fit and the result.
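The fitted function can be evaluated directly; a sketch in Python using the coefficients from the MATLAB fit above (valid only over the range of heights that were actually measured):

```python
import math

def rest_leg_length(height_desired):
    """Exponential fit relating desired hop height to rest leg length.
    Coefficients come from the MATLAB two-term exponential fit."""
    return (0.5009 * math.exp(0.04473 * height_desired)
            - 0.5657 * math.exp(-8.716 * height_desired))
```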

My approach was trial and error. I tried numerous values and compared settling time to oscillation. The result I got was hip_air_k = 30 and hip_air_b = 3. The leg swings to the position quickly with almost no oscillation. If hip_air_b is increased a little more, it decreases the shaking but increases the settling time. Below is the graph of the motion.
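The gains above amount to a spring-damper (PD) law on the hip angle. A minimal sketch, assuming the torque is a spring term on the angle error minus a damping term on the angular velocity:

```python
HIP_AIR_K = 30.0  # spring (proportional) gain found by trial and error
HIP_AIR_B = 3.0   # damper (derivative) gain found by trial and error

def hip_air_torque(theta, theta_desired, theta_dot):
    """Spring-damper torque that swings the leg to theta_desired quickly
    with little oscillation (assumed PD form, not the exact project code)."""
    return HIP_AIR_K * (theta_desired - theta) - HIP_AIR_B * theta_dot
```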

My approach was to estimate the position the foot will travel through as the time spent on the ground multiplied by the current velocity, divided by two, for the midpoint. However, from testing I found that the position is actually less than the midpoint, so I increased the divisor. I tried numerous values and then fit an equation, which made the velocity control better. From the x position and the y position, the angle was found with arctangent. To add horizontal motion, a gain multiplied by the difference from the desired speed was added. If this value is too small or too big it leads to problems, so I had to test it numerous times. I tried placing the gain term inside and outside the arctangent. Putting it inside made the robot hop more intensely at the beginning but level out to the target value faster, while putting it outside made it level out more smoothly but took longer. I chose to put it outside. The hip torque was found with trial and error until the body no longer shook or moved. Below shows speeds of 1 and 0.7 with a height of 0.6.
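The foot-placement logic above can be sketched as a Raibert-style controller. The gain k_xdot, the leg length, and the exact divisor are illustrative placeholders (the text notes the tuned divisor ended up larger than 2); the speed-error term is added outside the arctangent, as chosen above.

```python
import math

def foot_target_angle(v, v_desired, stance_time, leg_length,
                      divisor=2.0, k_xdot=0.05):
    """Raibert-style foot placement sketch: neutral point from stance time
    and velocity, then a speed-error correction added OUTSIDE the arctangent.
    All gains here are illustrative, not the tuned project values."""
    x_neutral = v * stance_time / divisor          # estimated touchdown point
    angle = math.atan2(x_neutral, leg_length)      # leg angle from x and y
    return angle + k_xdot * (v - v_desired)        # horizontal-speed term
```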

My approach was to use only the image file and run it through a series of image-processing steps to detect the centroid and orientation of the object. I first converted the RGB image to grayscale and then to a binary image. Objects with fewer than 10000 pixels were then removed from the binary image. After that, the number of objects in the image was counted, and if it was not one, the required size was increased until one object, the target, was left. After isolating the target, the image was dilated to remove specks and clean it up. The properties of the cleaned-up image were then computed for the remaining object. The centroid was returned, and the angle was converted to radians in the range -pi to pi.
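A minimal sketch of the final centroid-and-orientation step on an already-isolated binary mask, using second-order central moments in the style of MATLAB's regionprops (written in plain Python here so no toolbox is assumed):

```python
import math

def centroid_and_angle(binary):
    """Centroid and orientation of the foreground pixels of a binary image
    (list of rows of 0/1). Orientation comes from second-order central
    moments, similar to regionprops; the angle is in radians."""
    xs, ys = [], []
    for y, row in enumerate(binary):
        for x, v in enumerate(row):
            if v:
                xs.append(x)
                ys.append(y)
    n = len(xs)
    cx, cy = sum(xs) / n, sum(ys) / n
    mu20 = sum((x - cx) ** 2 for x in xs) / n
    mu02 = sum((y - cy) ** 2 for y in ys) / n
    mu11 = sum((x - cx) * (y - cy) for x, y in zip(xs, ys)) / n
    angle = 0.5 * math.atan2(2 * mu11, mu20 - mu02)
    return (cx, cy), angle

# A horizontal 3-pixel bar: centroid at (2, 1), orientation 0 radians.
mask = [[0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 0, 0]]
```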

- Centroid.m contains the function that returns the [x y] position and angle of the object after it is isolated from the input image. The value 10000 sets the minimum size of the target object and can be changed if necessary. It displays the processed image with the centroid marked.

- The problem with using only an image is that the image can have many issues, such as the object glowing. A glowing object looks larger than it actually is. If it glows evenly, the centroid is unaffected; if the glow is uneven, it leads to incorrect centroids and values. Another problem when measuring objects is that the image may be distorted by the shape of the lens, which can be corrected by warping the image. The centroid should be very accurate given the resolution of the camera and image, down to millimeters. The angle of the object is more difficult since it requires a clear understanding of the shape; it was off by about +/- 1 degree.
- The problem with missing 0 depth values is that those values have to be estimated through a difference, which means the depth may carry an overall error.

My approach was to generate a line between the target and the arm, solve for some resolution of points along the line, and slowly turn the angle of the hand until it reaches the target exactly. If a solution is not found, the angle constraint is relaxed, unless it is the final solution. If no solution is found at all, it shows the last point closest to the target without holding the target. The end result was a smooth motion from the start to the target while adjusting the angle of the hand.
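The idea can be sketched on a simple 2-link planar stand-in (link lengths are illustrative, and the angle-constraint relaxation of the real solver is omitted): closed-form IK at each of a fixed resolution of waypoints on the line to the target.

```python
import math

def two_link_ik(x, y, l1=1.0, l2=1.0):
    """Closed-form 2-link planar IK; returns (shoulder, elbow) in radians,
    or None if the point is out of reach. Link lengths are illustrative."""
    c = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if abs(c) > 1.0:
        return None  # target outside the reachable annulus
    elbow = math.acos(c)
    shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                             l1 + l2 * math.cos(elbow))
    return shoulder, elbow

def line_to_target(start, target, steps=10):
    """Waypoints on the line from the current hand position to the target;
    solving IK at each waypoint gives the smooth approach described above."""
    sx, sy = start
    tx, ty = target
    return [(sx + (tx - sx) * i / steps, sy + (ty - sy) * i / steps)
            for i in range(1, steps + 1)]
```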

- The constraints file has the option of ensuring the final angle matches the angle of the object.
- For the criterion file I tried adding a bias for the angle of the hand, but that didn't work that well, so I removed it.
- I added a new function, get_sol, which returns the answer based on the target and whether the position is final. The final flag changes the constraint so that the angle does not always have to match the target. This file also contains the limits for each joint.
- I added the function inv_k, which makes the motion smooth. It generates a line between the target and the arm and solves for a set resolution of points while tilting the angle of the hand based on position. If a solution is not found, the angle constraint is relaxed to find one. inv_k takes the final destination as input, returns a solution, and also reports whether the target is reachable.
- The TEST script has different tests I ran on it and has a loop that runs through all possibilities, which is commented out.

The video below demonstrates the successes and failures of grabbing a randomly placed target.

- The arm can be mapped out by running through all possible positions within a range and plotting them on a 2D scatter plot. The result would be roughly circular if constraints and angles were ignored. With constraints and angles, however, it gets a lot more complicated: being able to move to a position does not guarantee that an object at that position can be picked up. A 3D scatter plot would therefore be needed, representing all reachable points with the hand angle as the third axis. I tried mapping it out with a scatter plot, but it took a long time; it might be better to sample points in polar coordinates, since the arm essentially sweeps a radius.
- By flipping the joint, the arm should be able to reach lower and cover more angles, while losing some angles and no longer being able to reach as high as before. There will be a decrease in the positive y direction and an increase in the negative. The x reach should stay the same, since the arm can reach the same x positions, just flipped around. The reachable angles change a lot: flipping the joint means the arm loses much of its ability to grab in the original direction but gains angles in the other direction. Similarly, this can be mapped by running through all possible positions and angles.
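The mapping described in these bullets can be sketched with a forward-kinematics sweep on a 2-link stand-in (link lengths and joint limits are illustrative assumptions); each sample keeps the hand angle as the third coordinate of the reachability set.

```python
import math

def workspace_points(l1=1.0, l2=1.0, steps=24,
                     t1_range=(-math.pi / 2, math.pi / 2),
                     t2_range=(0.0, math.pi)):
    """Sweep the joint limits of a 2-link planar arm and return (x, y, angle)
    samples, where angle is the hand orientation: the 3D reachability set.
    Lengths and limits are illustrative, not the real arm's values."""
    pts = []
    for i in range(steps + 1):
        t1 = t1_range[0] + (t1_range[1] - t1_range[0]) * i / steps
        for j in range(steps + 1):
            t2 = t2_range[0] + (t2_range[1] - t2_range[0]) * j / steps
            x = l1 * math.cos(t1) + l2 * math.cos(t1 + t2)
            y = l1 * math.sin(t1) + l2 * math.sin(t1 + t2)
            pts.append((x, y, t1 + t2))  # hand angle as the third axis
    return pts
```

Flipping the elbow corresponds to sweeping t2 over (-pi, 0) instead, which shifts which (x, y, angle) triples appear in the set.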