Posts from February 2024

Week 7 Update (2/19-2/23)

This week showed progress in the development of our existing Move C++ node and the creation of our new MoveWithLidar.xml behavior tree. We did some restructuring so that all functionality lives in our node and all logic lives in our behavior tree, which is proving much more helpful and efficient. We now have real logic implemented in place of a basic loop of repeating hard-coded movement commands. Initially, we used a "go until you see a wall, stop, reverse until you see a wall, repeat" logic tree to test the Lidar's scope and limitations. Through many trials, we learned a lot about the 360° scope of the Lidar and uncovered some errors in the structure of our code (i.e., mixing logic between our node and behavior tree). We narrowed the scope of the Lidar so decisions are made only on input from an 11-to-1 o'clock range, so a parallel side wall will not trigger as an obstacle. Additionally, this Monday we received access to the actual Lidar sensor and will begin testing and implementation during the coming week.
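For reference, here is a minimal sketch of the kind of front-sector check behind that "drive until you see a wall" logic, assuming ROS 2 message headers and a sensor_msgs/LaserScan whose 0 rad angle points at the front of the bot; the ±30° window and 0.5 m stop distance are illustrative values, not our tuned parameters.

```cpp
// Hypothetical helper for the "drive until you see a wall" condition:
// look only at the 11-to-1 o'clock window (about +/-30 degrees around the
// front) of a LaserScan and report whether anything is close.
// Angle convention and the 0.5 m threshold are assumptions for illustration.
#include <cmath>
#include <cstddef>

#include "sensor_msgs/msg/laser_scan.hpp"

bool obstacle_ahead(const sensor_msgs::msg::LaserScan & scan,
                    double window_rad = M_PI / 6.0,  // +/-30 degrees
                    double stop_distance = 0.5)      // metres
{
  for (size_t i = 0; i < scan.ranges.size(); ++i) {
    const double angle = scan.angle_min + i * scan.angle_increment;
    // Skip readings outside the front window so a parallel side wall
    // never triggers a stop.
    if (std::fabs(angle) > window_rad) {
      continue;
    }
    const float r = scan.ranges[i];
    // Ignore invalid returns (inf/NaN or outside the sensor's range).
    if (!std::isfinite(r) || r < scan.range_min || r > scan.range_max) {
      continue;
    }
    if (r < stop_distance) {
      return true;  // wall (or other obstacle) directly ahead
    }
  }
  return false;
}
```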

Week 6 Update (2/12-2/16)

This week showed progress in the continued development of a script for automated movement and 3D mapping. Through a lot of research, debugging, and troubleshooting, we successfully created a node, "Move", written in C++, which currently publishes pre-planned movement commands to the corresponding Move topic instead of relying on user-inputted commands from a terminal. Admittedly, this does not seem like significant progress in terms of bot movement/intelligence, since that level has remained consistent with the previous week, but eliminating direct user control is actually a big step for the project. Total hours: 22 hours.

Fig 1. Node list showing the active Move node.
Fig 2. Topic list showing the active Move topic (receiving from the node).
Fig 3. Movement driven by the Move node (no logic yet).
Fig 4. Move node publishing commands to the Move topic.
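As a rough illustration of what the Move node does (not our exact code), here is a minimal sketch assuming ROS 2 (rclcpp) and that the Move topic carries geometry_msgs/Twist; the /cmd_vel topic name, publish rate, and speeds are placeholders.

```cpp
// Hypothetical Move-style node: publish a fixed, pre-planned forward command
// on a timer instead of reading teleop input from a terminal.
// Topic name, rate, and speed are illustrative assumptions.
#include <chrono>
#include <memory>

#include "geometry_msgs/msg/twist.hpp"
#include "rclcpp/rclcpp.hpp"

using namespace std::chrono_literals;

class Move : public rclcpp::Node
{
public:
  Move() : Node("move")
  {
    publisher_ = create_publisher<geometry_msgs::msg::Twist>("/cmd_vel", 10);
    // Publish a pre-planned command every 100 ms; no sensor logic yet.
    timer_ = create_wall_timer(100ms, [this]() {
      geometry_msgs::msg::Twist cmd;
      cmd.linear.x = 0.2;   // drive forward at a fixed speed
      cmd.angular.z = 0.0;  // no turning
      publisher_->publish(cmd);
    });
  }

private:
  rclcpp::Publisher<geometry_msgs::msg::Twist>::SharedPtr publisher_;
  rclcpp::TimerBase::SharedPtr timer_;
};

int main(int argc, char ** argv)
{
  rclcpp::init(argc, argv);
  rclcpp::spin(std::make_shared<Move>());
  rclcpp::shutdown();
  return 0;
}
```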

Week 5 Update (2/5-2/9)

This week's progress focused on the planning and development of an algorithm for intelligent 3D mapping and path planning. Since simulated 3D mapping with the LiDAR sensor has already been achieved through various ROS libraries, our focus is on the algorithm that will automate the process and eliminate the need for a user controller. After much research on the topic, we are in the process of developing a behavior tree and a ROS node that will control the behaviors of the bot (output to the Arduino/motors) based on input from the LiDAR sensor. We began with simple prescribed movements run from a terminal (which will later be wrapped in our launch file) to ensure movements are accurate before implementing the logic of the behavior tree. We also confirmed the status of the real LiDAR sensor with the ME/EE group, and we should have access to it the following week. This is going to be integral to our next steps, as we want to begin transferring code to the Raspberry Pi.
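As a sketch of the structure we are working toward, here is a minimal skeleton assuming the BehaviorTree.CPP (v3) library; the library choice, the DriveForward action name, and the move_tree.xml file name are all assumptions for illustration rather than our actual implementation.

```cpp
// Hypothetical skeleton: register one custom action with a behavior tree
// and tick a tree loaded from XML. The real action would publish velocity
// commands; here it only returns SUCCESS so the tree structure can be tested.
#include <string>

#include "behaviortree_cpp_v3/bt_factory.h"

class DriveForward : public BT::SyncActionNode
{
public:
  DriveForward(const std::string & name, const BT::NodeConfiguration & config)
  : BT::SyncActionNode(name, config) {}

  static BT::PortsList providedPorts() { return {}; }

  BT::NodeStatus tick() override
  {
    // Placeholder: a real implementation would command the motors here.
    return BT::NodeStatus::SUCCESS;
  }
};

int main()
{
  BT::BehaviorTreeFactory factory;
  factory.registerNodeType<DriveForward>("DriveForward");
  auto tree = factory.createTreeFromFile("move_tree.xml");  // illustrative file name
  tree.tickRoot();
  return 0;
}
```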

Week 4 Update (1/29-2/2)

This week focused on preparing to transition out of the current stage of the project and shifting focus to developing an intelligent path-planning algorithm. With the LiDAR simulation nearly complete, the team will now begin implementing planning of the best possible path for mapping the space the robot is placed in. As a result, the focus was mainly on researching the best route for mapping a room, such as search algorithms like Breadth-First Search and Depth-First Search (see the sketch below), and on discussing how to add autonomous control to the simulation model, which currently moves only with human input via a keyboard. Lastly, the team met with the Mechanical/Electrical team to discuss when code could be put onto the robot itself. We agreed to work independently for the next 4-6 weeks until the robot is assembled, and then our team will have access to the Arduino/motors to begin implementing and testing software. Total hours: 15 hours.
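As a toy illustration of the breadth-first idea we are reading about (not the robot's planner), here is a small sketch that counts BFS steps between two cells on a made-up occupancy grid; the grid values and cells are invented for the example.

```cpp
// Toy breadth-first search over a small occupancy grid (0 = free, 1 = wall).
// Returns the number of steps from a start cell to a goal cell, or -1 if the
// goal is unreachable. Purely illustrative of the algorithm.
#include <cstdio>
#include <queue>
#include <utility>
#include <vector>

int bfs_steps(const std::vector<std::vector<int>> & grid,
              std::pair<int, int> start, std::pair<int, int> goal)
{
  const int rows = static_cast<int>(grid.size());
  const int cols = static_cast<int>(grid[0].size());
  std::vector<std::vector<int>> dist(rows, std::vector<int>(cols, -1));
  std::queue<std::pair<int, int>> frontier;
  dist[start.first][start.second] = 0;
  frontier.push(start);

  const int dr[4] = {1, -1, 0, 0};
  const int dc[4] = {0, 0, 1, -1};
  while (!frontier.empty()) {
    const auto [r, c] = frontier.front();
    frontier.pop();
    if (r == goal.first && c == goal.second) {
      return dist[r][c];  // reached the goal
    }
    for (int k = 0; k < 4; ++k) {
      const int nr = r + dr[k], nc = c + dc[k];
      if (nr < 0 || nr >= rows || nc < 0 || nc >= cols) continue;  // off the grid
      if (grid[nr][nc] == 1 || dist[nr][nc] != -1) continue;       // wall or visited
      dist[nr][nc] = dist[r][c] + 1;
      frontier.push({nr, nc});
    }
  }
  return -1;  // goal unreachable
}

int main()
{
  const std::vector<std::vector<int>> room = {
    {0, 0, 0, 1},
    {1, 1, 0, 1},
    {0, 0, 0, 0},
  };
  std::printf("steps: %d\n", bfs_steps(room, {0, 0}, {2, 3}));  // prints 5
  return 0;
}
```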