Improving robot vision models for object detection through interaction
@InProceedings{Leitner:2014:IJCNN,
  author =       "J. Leitner and A. Foerster and J. Schmidhuber",
  booktitle =    "International Joint Conference on Neural Networks
                 (IJCNN 2014)",
  title =        "Improving robot vision models for object detection
                 through interaction",
  year =         "2014",
  month =        jul,
  pages =        "3355--3362",
  abstract =     "We propose a method for learning specific object
                 representations that can be applied (and reused) in
                 visual detection and identification tasks. A machine
                 learning technique called Cartesian Genetic
                 Programming (CGP) is used to create these models
                 based on a series of images. Our research
                 investigates how manipulation actions might allow for
                 the development of better visual models and therefore
                 better robot vision. This paper describes how visual
                 object representations can be learnt and improved by
                 performing object manipulation actions, such as poke,
                 push and pick-up, with a humanoid robot. The
                 improvement can be measured and allows the robot to
                 select and perform the `right' action, i.e. the
                 action with the best possible improvement of the
                 detector.",
  keywords =     "genetic algorithms, genetic programming, cartesian
                 genetic programming",
  DOI =          "doi:10.1109/IJCNN.2014.6889556",
  notes =        "Also known as \cite{6889556}",
}
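
To make the abstract's core idea concrete, below is a minimal Python sketch of a CGP-style pixel detector evolved with a (1 + lambda) strategy: a fixed-length genotype encodes a feed-forward graph of image operations, the graph is executed on the image channels to produce a detection mask, and the mask is scored against a ground-truth mask. The primitive set, the F1 fitness measure, and all parameters are illustrative assumptions, not the configuration used in the paper; the paper's action-selection step (choosing the manipulation with the best expected improvement of the detector) would sit on top of such a loop and is not modelled here.

# Minimal CGP-style detector sketch (illustrative assumptions throughout).
import numpy as np

# Hypothetical per-pixel primitives operating on 2-D float arrays in [0, 1].
PRIMITIVES = [
    lambda a, b: np.minimum(a, b),
    lambda a, b: np.maximum(a, b),
    lambda a, b: np.clip(a + b, 0.0, 1.0),
    lambda a, b: np.clip(a - b, 0.0, 1.0),
    lambda a, b: np.abs(a - b),
    lambda a, b: (a + b) / 2.0,
]
N_NODES, N_INPUTS = 20, 3          # graph size and number of image channels

def random_genotype(rng):
    # Each node is (function index, input index a, input index b).
    genes = []
    for i in range(N_NODES):
        max_src = N_INPUTS + i     # feed-forward: connect only to earlier nodes
        genes.append((rng.integers(len(PRIMITIVES)),
                      rng.integers(max_src), rng.integers(max_src)))
    return genes

def evaluate(genes, channels):
    # Execute the graph on the input channels; the last node is the output map.
    values = list(channels)
    for f_idx, a, b in genes:
        values.append(PRIMITIVES[f_idx](values[a], values[b]))
    return values[-1] > 0.5        # threshold into a binary detection mask

def fitness(genes, channels, target_mask):
    # F1 score of the predicted mask against the ground-truth mask.
    pred = evaluate(genes, channels)
    tp = np.logical_and(pred, target_mask).sum()
    fp = np.logical_and(pred, ~target_mask).sum()
    fn = np.logical_and(~pred, target_mask).sum()
    return 2.0 * tp / max(2 * tp + fp + fn, 1)

def mutate(genes, rng, rate=0.1):
    # Point mutation: resample a node's function and connections.
    child = []
    for i, (f_idx, a, b) in enumerate(genes):
        if rng.random() < rate:
            max_src = N_INPUTS + i
            child.append((rng.integers(len(PRIMITIVES)),
                          rng.integers(max_src), rng.integers(max_src)))
        else:
            child.append((f_idx, a, b))
    return child

def evolve(channels, target_mask, generations=200, lam=4, seed=0):
    # (1 + lambda) evolution strategy with neutral drift (accept equal fitness).
    rng = np.random.default_rng(seed)
    parent = random_genotype(rng)
    best = fitness(parent, channels, target_mask)
    for _ in range(generations):
        for _ in range(lam):
            child = mutate(parent, rng)
            f = fitness(child, channels, target_mask)
            if f >= best:
                parent, best = child, f
    return parent, best

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    img = rng.random((3, 64, 64))             # fake 3-channel training image
    mask = img[0] > 0.7                       # fake ground-truth object mask
    genes, score = evolve(list(img), mask)
    print("best F1 on the training image:", round(score, 3))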
Genetic Programming entries for Juergen Leitner, A Foerster, Jurgen Schmidhuber