Thursday, July 2, 2009

ARCS v0.1 Demo

It's been a while since my last post, but posts will be coming more and more frequently now that I have a new project up and running. This project, which I have named ARCS (Autonomous Rubik's Cube Solver), was sparked over the weekend when I finally learned how to solve a Rubik's Cube, with some help from a solution manual my dad gave me, the same one he used when he was a kid eons ago. After learning how to solve it, I thought that the logical next step would be building a robot that could do it, and the next morning I started to do just that.

The video below is the product of only 2.5 days of work, so while it isn't perfect, there are plenty of ways I can think of to make it better. Note that in the video it is not solving the cube autonomously; I gave it the directions and it carried them out. As of right now, ARCS has no way of actually reading the cube, but I plan to enable that with either a color sensor or a webcam (I think a webcam would be pretty tight). I also have not had to code an AI, because there is no color data yet, but I plan on outlining my program using flowcharts, posting it here, and letting my huge fanbase of intellectual giants poke holes in it and offer suggestions (actually Cammy I think you're the only one :-P).

Another future improvement will be structural: at the moment the yellow claw is too wobbly and does not hold the cube steadily enough, resulting in twists != 90 degrees, as you can see from the final two twists in the video. I had to do about 30 takes of the video because of this, since when it twists too much or too little, it binds up the cube and jams it later on. If anyone happens to read this, let me know what you think, and I will try to post again soon.

Saturday, July 19, 2008

NXTSegway Revision 0.4 Final: New Design (Again) and PID Controller!

This is the fourth public release of my NXTSegway, and I am quite pleased with the results. I re-re-designed the robot: Dean Hystad informed me on the NXTStep forums of the benefit that a higher center of gravity can give to a balancing robot like NXTSegway. He gave an excellent example: it is much easier to balance a golf club on your finger than a pen.

Along with the bigger wheels, as the title says, I also implemented a PID motion control system that solved all my balance problems. The program computes three error terms, multiplies them by three constants (kp, kd, and ki), adds the results together to form the PID value, scales the PID value to fit in the motors' power range (-100 to 100), and sets the motor power equal to the scaled PID value. The three terms work like this: the proportional term (weighted by kp) changes proportionally to the error, the derivative term (weighted by kd) changes proportionally to the rate of change of the error, and the integral term (weighted by ki) changes proportionally to the sum of all previous error values; the constant multipliers weight these terms based on need. For a complete, technical PID tutorial, check out this site.

As you can see from the video below, this controller is highly effective in creating a stable inverted pendulum. Note: if you want to use my source code and have a working robot, you must use a very similar design and adjust the kp, kd, ki, adj, and scale values to fit your robot and its environment. My values (especially the kp value) are extremely high because the batteries were low...once I replaced the batteries, there was WAY too much overshoot.
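
The loop described above can be sketched roughly like this. This is a minimal illustration, not my actual program: the constant values, the scale factor, and the function names are all made up for the example.

```c
/* Sketch of one PID controller pass. The kp/kd/ki values and the
 * scale factor below are illustrative only -- real values must be
 * tuned to the robot and its environment. */

static float kp = 25.0f;   /* weights the proportional term */
static float ki = 0.5f;    /* weights the integral term     */
static float kd = 12.0f;   /* weights the derivative term   */

/* One controller step: given the current error, update the running
 * integral and previous error, and return the raw PID value. */
float pid_step(float error, float *integral, float *prev_error)
{
    *integral += error;                      /* sum of all past error   */
    float derivative = error - *prev_error;  /* rate of change of error */
    *prev_error = error;
    return kp * error + ki * (*integral) + kd * derivative;
}

/* Scale the PID value into the motors' power range (-100 to 100). */
int scaled_power(float pid, float scale)
{
    int power = (int)(pid * scale);
    if (power > 100)  power = 100;
    if (power < -100) power = -100;
    return power;
}
```

In a real program these two calls would sit inside the main balance loop, with the sensor reading minus the target value as the error.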

Annotated source code can be found here (4KB)

Thursday, July 10, 2008

NXTSegway Extreme Makeover

Well, as cool as the old NXTSegway looked, with the big wheels and all, I decided it was time to redesign it for several reasons: 1) the big wheels increased the height as well as the center of gravity, making it harder to pull the robot back to equilibrium; 2) the big wheels actually caused the robot to over-correct and lose precision due to their much larger circumference; and 3) the larger wheels prevented me from putting a bar across the front to hold the motors on laterally, so I had to settle for strapping a green rubber band around the middle, which kept getting in the way. I am going to continue the NXTSegway project with this model, which will hopefully yield better results. If you want a model of it, you can download the .lxf file (Lego Digital Designer file) here. I'll post pictures or video with the release of the next revision.

Amazing Robotic Hand

I was looking around for some inspiration a few weeks ago and came upon something that really impressed me: a video of a robotic hand with three joints that was able to imitate another person's hand. At the time, I was not quite back into working with the NXT, and it seemed much too advanced for me to even try to figure out. However, now that I have found an excellent development environment and am steadily improving my skills, I am intrigued by what the robot's developer, named Ramin, was able to do. The comments in the video help viewers see what is happening, as does the source code. The robot uses three HiTechnic gyro sensors attached to the user's hand, which output the rate of rotation of each joint to the NXT; the NXT integrates that rate over time to find each joint's angle. The angle is then sent to the motors, which use a proportional motion controller to calculate the power and direction needed to imitate the human hand. The source code is open to anyone who wants a look and is available here.
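
The two pieces of that scheme (integrating rate into angle, then a proportional controller on the angle error) can be sketched like this. Note this is my own illustration of the idea, not Ramin's code; the sample period and every name here are assumptions.

```c
/* Sketch: turn a gyro's angular rate into a joint angle, then drive
 * the matching motor toward that angle proportionally. DT and the
 * function names are illustrative assumptions. */

#define DT 0.01f   /* assumed sample period in seconds */

/* Accumulate rotation: angle += rate * dt (numerical integration). */
float integrate_rate(float angle, float rate_deg_per_s)
{
    return angle + rate_deg_per_s * DT;
}

/* Proportional controller: power is proportional to the angle error,
 * and its sign gives the direction needed to imitate the hand. */
int p_power(float target_angle, float motor_angle, float kp)
{
    float error = target_angle - motor_angle;
    int power = (int)(kp * error);
    if (power > 100)  power = 100;
    if (power < -100) power = -100;
    return power;
}
```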

Monday, July 7, 2008

NXTSegway Revision 0.2

OK...this project has experienced a major breakthrough today, as I have implemented a form of dynamic recalibration of the robot's "stable point." As you can see from the videos, the robot moves back and forth, and I was able to use that movement to my advantage. The new program measures the raw light data at the extremes (the points where the robot is momentarily stopped and ready to move back the other way), finds the average of those extremes, and sets that average as the new target light value. This way, if the robot begins to lean a tiny bit too much in one direction, the target light value is shifted in that direction so as to maintain balance (ideally). However, this method can only overcome minor error, not substantial error, so if the robot leans too far it will still careen into the ground (as is evident in the video below). If you want further explanation, take a look at the commented source code and see if you can give me any suggestions :-)
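
The recalibration step itself boils down to a midpoint. A tiny sketch of the idea (names are illustrative, not my actual variables):

```c
/* Sketch of the dynamic recalibration: at each extreme of the rocking
 * motion, the raw light reading is recorded; once both extremes of a
 * cycle are in, their midpoint becomes the new target light value. */

typedef struct {
    int left_extreme;    /* raw reading at one extreme       */
    int right_extreme;   /* raw reading at the other extreme */
    int target;          /* current "stable point" value     */
} Balance;

/* Call once per rocking cycle, after both extremes were sampled. */
void recalibrate(Balance *b)
{
    b->target = (b->left_extreme + b->right_extreme) / 2;
}
```

If the robot starts leaning toward one side, that side's extreme reading drifts, dragging the midpoint (and therefore the target) in the same direction.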

Source code can be found here (4KB)

Sunday, July 6, 2008

NXTSegway Revision 0.1

This is the first revised edition of my project NXTSegway, which was first released yesterday. As you can see from the video, the robot is a tad more stable and, with some extra work in the next couple of weeks, might just rival some of the other segway-bots out there :-) To achieve quicker response time, I replaced the scaled-to-100 light sensor data with the raw, more precise 0-1023 data. I tried to implement some sort of bias into the correction process to overcome the imbalance in the structure itself, but after a couple hours of fruitless work I found that if I calibrate the robot's "vertical" light value while tilting it slightly, the program will compensate for the imbalance without any extra code. Granted, this will need to be changed, but it works for now. I plan on adding some sort of dynamic adjustment of the midValue later this week; it is harder than I had previously thought. Anyway, here is the new video and code:
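
To see why the raw data helps, consider what the scaling throws away (the function name below is just for illustration):

```c
/* Scaling a raw 0-1023 reading down to 0-100 collapses roughly ten
 * raw steps into a single scaled step, so small changes in tilt that
 * the raw value can distinguish become invisible after scaling. */
int scale_to_100(int raw)
{
    return raw * 100 / 1023;
}
```

Two raw readings about ten apart can land on the same scaled value, which is exactly the precision the robot was missing before.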

Source file can be downloaded here (5KB)

Saturday, July 5, 2008

My First Project: NXTSegway Initial Release

Now, before I write anything else, I know this is nothing new, exciting, or revolutionary. However, this idea is what prompted me to get back into Mindstorms - I saw a video of one on Youtube and thought it was cool. This is my first attempt at a self-balancing, two-wheeled robot, written in ROBOTC (I couldn't do it in NXT-G) and using a light sensor. To run it, one must first calibrate it, storing the values of the light sensor when the robot is vertical and when it is tilted to each side. Then, the robot reads the value of the light sensor continuously, calculating the direction of the error and compensating by rotating the wheels in the direction opposite the error. It is extremely jerky and unstable due to its lack of complexity (I wrote the program today just to see if this simple idea would be enough to stabilize the robot). My next revision will probably include dynamic adjustment of the midValue, which determines when the robot is vertical, because in its present state the robot moves around a lot and the midValue is not adjusted accordingly (the amount of light directly underneath the robot changes as it moves around). My ultimate goal is to purchase a Gyro sensor and implement some sort of motion control system (probably PID) that would (ideally) produce a very stable robot. Ryo Watanabe did something like this; his project, NXTWay-G, is documented completely at his website. Anyway, here's a video of my sorry robot and the code that runs it:
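
The correction idea is simple enough to fit in a few lines. This is just an illustration of the concept, not my actual ROBOTC source; the names and gain are made up.

```c
/* Sketch of the simple balance idea: compare the light reading to the
 * calibrated vertical value (midValue) and drive the wheels against
 * the error. The gain and names here are illustrative assumptions. */
int correction_power(int reading, int midValue, int gain)
{
    int error = reading - midValue;   /* sign shows which way we lean */
    int power = -gain * error;        /* rotate wheels against error  */
    if (power > 100)  power = 100;    /* clamp to motor power range   */
    if (power < -100) power = -100;
    return power;
}
```

With only this proportional-style reaction and no damping, the robot overshoots every correction, which is where the jerkiness comes from.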

Source file can be downloaded here (4KB)

Welcome to MADRobotics!

This site will be devoted primarily to my endeavors in the robotics field using LEGO Mindstorms NXT products. New projects, ideas, robots, building instructions, and programs will be documented here, as well as my experiences, positive or negative, with new programming environments and third-party sensors. I began using Mindstorms two years ago, but at the time I did not know much programming, and the only language I was familiar with, Visual Basic, was text based. The graphical NXT-G software was almost too simple, and I was not able to use it to make the robots do what I wanted them to do. So, I gave them up for a while and began learning C++, which has helped me out immensely. I have just started using my Mindstorms again with ROBOTC, a programming language and environment developed by Carnegie Mellon University, and I am having a completely different experience than I did before. I hope you enjoy my site as it begins to grow; feel free to leave comments, suggestions, and tips, as I will be posting much of my source code here for the public to analyze and revise.