Navigation Method
#1
Junior Member
Thread Starter
Join Date: Jan 2010
Location: St. John's, NL, Canada
Posts: 3
Navigation Method
First of all, thank you to anyone who takes the time to offer me some advice.
Background:
A friend and I are currently working on an autonomous robot that will have the ability to navigate the university campus.
The robot body and drive system are complete, with a Phidgets motor controller driving four DC wheel motors.
A laptop accepts input from a remote computer, and we can navigate manually at this point. A Kinect sensor
is also mounted at the front, but it is not in use yet.
Question:
We have come to a standstill over how to give the robot autonomous functionality.
He suggests we simply use distance sensors on the robot to create a 2D map for the robot to navigate by,
with the Kinect running independently to create a point cloud and, from that, a 3D map.
I, on the other hand, believe we should use the Kinect sensor at all times. I feel that a 2D system will get lost while
creating and navigating maps, whereas an always-on 3D system would be able to locate itself in space and
constantly adapt to changes.
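To make the 2D option concrete, here is roughly the kind of thing we have in mind: an occupancy grid updated from a single range reading. This is only a minimal sketch; the grid size, cell size, step size, and function names are all illustrative choices of ours, not anything from a library or from our actual code.

```python
import math

GRID_SIZE = 20          # 20 x 20 cells (illustrative)
CELL_M = 0.25           # each cell covers 0.25 m x 0.25 m (illustrative)

def update_grid(grid, rx, ry, heading_rad, range_m):
    """Mark cells along the sensor ray as free, and the cell at the hit as occupied.

    (rx, ry): robot position in metres; heading_rad: sensor bearing;
    range_m: distance reported by the distance sensor.
    """
    step = CELL_M / 2.0
    d = 0.0
    # March along the ray: every cell the beam passed through is free space.
    while d < range_m:
        cx = int((rx + d * math.cos(heading_rad)) / CELL_M)
        cy = int((ry + d * math.sin(heading_rad)) / CELL_M)
        if 0 <= cx < GRID_SIZE and 0 <= cy < GRID_SIZE:
            grid[cy][cx] = 0            # free
        d += step
    # The cell at the end of the ray is where the obstacle was detected.
    hx = int((rx + range_m * math.cos(heading_rad)) / CELL_M)
    hy = int((ry + range_m * math.sin(heading_rad)) / CELL_M)
    if 0 <= hx < GRID_SIZE and 0 <= hy < GRID_SIZE:
        grid[hy][hx] = 1                # occupied
    return grid

grid = [[0] * GRID_SIZE for _ in range(GRID_SIZE)]
update_grid(grid, 1.0, 1.0, 0.0, 2.0)   # one reading: obstacle 2 m straight ahead
```

My worry is exactly the step this sketch glosses over: it assumes (rx, ry) and the heading are known, which is the localization problem a 3D system might handle better.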
Could someone please help push us in one direction or the other? If anyone is aware of a project with detailed information
about its process, that would be wonderful. Any and all information and advice would be valuable and very much appreciated.
Thank you kindly,
Brandon King
Memorial University of Newfoundland