Research

My research focuses on sequential decision-making under uncertainty, active perception, and deep learning for robot vision. These focus areas, along with key references, are described below. For a full list of my research output, see my complete publication list.

Sequential decision-making under uncertainty

Left: Single-agent decision-making. Right: Multi-agent decision-making.

Sequential decision-making under uncertainty involves a single agent or a team of agents. Each agent takes an action \(a\), and then perceives an observation \(z\) that reveals something about the underlying hidden state \(s\) of the system. When actions are taken, the hidden state undergoes a stochastic state transition. The observations are also noisy, and provide only partial knowledge about the state. A control policy \(\pi\) prescribes how an agent should select its next action given its past actions and observations. The objective is to find a control policy for each agent that maximizes the expected utility accumulated over a horizon of multiple decisions. This problem may be formalized as a partially observable Markov decision process (POMDP) in the single-agent case, and as a decentralized POMDP (Dec-POMDP) in the multi-agent case.
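The core computation behind this formalism is the Bayes filter that maintains the agent's belief over the hidden state. The sketch below is a minimal illustration for a discrete POMDP; the array shapes and the two-state example are hypothetical, not taken from any of the papers cited here.

```python
import numpy as np

def belief_update(b, a, z, T, O):
    """Bayes filter update for a discrete POMDP.

    b: belief over states, shape (S,)
    a: action index
    z: observation index
    T: transition model, T[a, s, s'] = P(s' | s, a), shape (A, S, S)
    O: observation model, O[a, s', z] = P(z | s', a), shape (A, S, Z)
    """
    # Predict: push the belief through the stochastic state transition.
    predicted = b @ T[a]
    # Correct: weight each state by the likelihood of the observation.
    updated = predicted * O[a, :, z]
    return updated / updated.sum()

# Tiny two-state example: static state, noisy binary sensor.
T = np.zeros((1, 2, 2)); T[0] = np.eye(2)
O = np.zeros((1, 2, 2)); O[0] = np.array([[0.9, 0.1],
                                          [0.2, 0.8]])
b1 = belief_update(np.array([0.5, 0.5]), a=0, z=0, T=T, O=O)
# b1 ≈ [0.818, 0.182]: observing z=0 shifts belief toward state 0.
```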

In particular, I am interested in tasks involving active information gathering, where the objective is to maximize the amount of information about the hidden state. The amount of information can be quantified by using information-theoretic quantities such as entropy or mutual information on the belief state \(b\), which is a probability distribution over the hidden state. I have developed theory and algorithms for multi-agent active information gathering in Dec-POMDPs (Lauri et al., 2020; Lauri et al., 2019) and applied the same methodology to multi-robot target tracking (Lauri et al., 2017). I also formulated robotic exploration as a POMDP and applied it to a real robot exploring an unknown environment (Lauri & Ritala, 2016).
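The information-gathering objective can be made concrete with a one-step example: the expected reduction in belief entropy caused by an action equals the mutual information between the next hidden state and the observation. The function names and models below are a hypothetical sketch, not the algorithm of the cited papers, which plan over multi-step policies.

```python
import numpy as np

def entropy(p, eps=1e-12):
    # Shannon entropy (in bits) of a discrete distribution.
    p = np.asarray(p, dtype=float)
    return -np.sum(p * np.log2(p + eps))

def expected_info_gain(b, a, T, O):
    """One-step expected reduction in belief entropy for action a.

    Equals the mutual information between the next hidden state and
    the observation, under the current belief b.
    """
    predicted = b @ T[a]        # predicted belief over the next state
    p_z = predicted @ O[a]      # marginal observation probabilities
    gain = entropy(predicted)
    for z, pz in enumerate(p_z):
        if pz > 0:
            posterior = predicted * O[a, :, z] / pz
            gain -= pz * entropy(posterior)
    return gain

# A perfect binary sensor on a uniform two-state belief yields 1 bit.
T = np.zeros((1, 2, 2)); T[0] = np.eye(2)
O = np.zeros((1, 2, 2)); O[0] = np.eye(2)
gain = expected_info_gain(np.array([0.5, 0.5]), 0, T, O)
```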

Active perception

Left: Active visual object search. Middle: Scene reconstruction using multiple cameras (Image courtesy of Joni Pajarinen). Right: Robotic exploration.

An agent capable of observing the world around it can select when, where, and how to use its sensors to obtain more information about its environment or to explore it. This process of planning how to use perception resources is known as active perception.

I am especially interested in applications of active perception in mobile robotics. In our research work on active perception, we have applied deep reinforcement learning to help a robot locate a target object by searching in different types of apartments (Schmid et al., 2019), and greedy submodular maximization to find next-best-views for scene reconstruction using multiple cameras (Lauri et al., 2019).
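Greedy submodular maximization for view selection can be sketched with coverage as a stand-in objective: each candidate view covers a set of scene cells, and coverage of the union is monotone submodular, so the greedy rule carries the classical (1 - 1/e) approximation guarantee. The data structures below are hypothetical, simplified from the information-theoretic objective used in the paper.

```python
def greedy_next_best_views(views, k):
    """Greedily select k views maximizing the union of covered cells.

    views: dict mapping a view id to the set of scene cells it observes.
    Returns the selected view ids and the resulting covered set.
    """
    selected, covered = [], set()
    for _ in range(k):
        remaining = [v for v in views if v not in selected]
        if not remaining:
            break
        # Pick the view with the largest marginal coverage gain.
        best = max(remaining, key=lambda v: len(views[v] - covered))
        if not views[best] - covered:
            break  # no candidate adds new information
        selected.append(best)
        covered |= views[best]
    return selected, covered

views = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5}}
selected, covered = greedy_next_best_views(views, k=2)
# Greedy picks "a" first (3 new cells), then "c" (2 new cells).
```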

We used simulation-based planning to find trajectories for environment exploration (Lauri & Ritala, 2016), and found it to outperform classical frontier exploration techniques in certain environments. A video summarizing this work accompanies the publication.
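The idea of planning by forward simulation can be condensed to evaluating each candidate action by the mean utility of sampled rollouts. The function and the toy simulator below are hypothetical stand-ins for the forward simulator and trajectory candidates used in the paper.

```python
import random

def plan_by_forward_simulation(actions, simulate, n_rollouts=100):
    """Pick the candidate action with the best mean simulated utility.

    simulate(action) returns one sampled utility (e.g. map information
    gained) for executing `action` from the current state.
    """
    def mean_utility(a):
        return sum(simulate(a) for _ in range(n_rollouts)) / n_rollouts
    return max(actions, key=mean_utility)

# Toy simulator: "right" yields higher utility on average, with noise.
rng = random.Random(0)
simulate = lambda a: {"left": 1.0, "right": 2.0}[a] + rng.gauss(0, 0.1)
best = plan_by_forward_simulation(["left", "right"], simulate)
```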

Robot vision

Left: Object pose estimation. Right: Attention-based RGBD segmentation. Images courtesy of Ge Gao.

Robots equipped with cameras face a variety of challenging conditions that affect their visual perception. Adverse conditions in uncontrolled environments degrade the quality of captured images. Unconventional viewpoints, high spatio-temporal dependency between captured images, and the opportunity for interaction with the environment also distinguish robot vision from conventional computer vision applications.

I am interested in the active perception and interaction aspects of robot vision, in combining sensing modalities such as RGB and depth, and in the role of uncertainty in visual perception. We have developed a deep learning method for estimating object poses from point cloud data (Gao et al., 2020), an attention-based segmentation method for RGB-D data (Gao et al., 2017), and a segmentation method for object discovery that applies non-parametric Bayesian techniques for uncertainty quantification (Lauri & Frintrop, 2017).

References

  1. Lauri, M., Pajarinen, J., & Peters, J. (2020). Multi-agent active information gathering in discrete and continuous-state decentralized POMDPs by policy graph improvement. Autonomous Agents and Multi-Agent Systems, 34(42). https://doi.org/10.1007/s10458-020-09467-6
  2. Lauri, M., Pajarinen, J., & Peters, J. (2019). Information Gathering in Decentralized POMDPs by Policy Graph Improvement. Proceedings of the 18th International Conference on Autonomous Agents and Multiagent Systems (AAMAS), 1143–1151. https://dl.acm.org/doi/abs/10.5555/3306127.3331815
  3. Lauri, M., Heinänen, E., & Frintrop, S. (2017). Multi-Robot Active Information Gathering with Periodic Communication. IEEE Intl. Conf. on Robotics and Automation (ICRA), 851–856. https://doi.org/10.1109/ICRA.2017.7989104
  4. Lauri, M., & Ritala, R. (2016). Planning for robotic exploration based on forward simulation. Robotics and Autonomous Systems, 83, 15–31. https://doi.org/10.1016/j.robot.2016.06.008
  5. Schmid, J. F., Lauri, M., & Frintrop, S. (2019, November). Explore, Approach, and Terminate: Evaluating Subtasks in Active Visual Object Search Based on Deep Reinforcement Learning. IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS). https://doi.org/10.1109/IROS40897.2019.8967805
  6. Lauri, M., Pajarinen, J., Peters, J., & Frintrop, S. (2019, June). Approximation of joint information gain for multi-sensor volumetric scene reconstruction. 2nd Workshop on Informative Path Planning and Adaptive Sampling.
  7. Gao, G., Lauri, M., Wang, Y., Hu, X., Zhang, J., & Frintrop, S. (2020, June). 6D Object Pose Regression via Supervised Learning on Point Clouds. Proc. IEEE Intl. Conf. on Robotics and Automation (ICRA).
  8. Gao, G., Lauri, M., Zhang, J., & Frintrop, S. (2017). Saliency-guided Adaptive Seeding for Supervoxel Segmentation. IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), 4938–4943. https://doi.org/10.1109/IROS.2017.8206374
  9. Lauri, M., & Frintrop, S. (2017). Object proposal generation applying the distance dependent Chinese restaurant process. Scandinavian Conference on Image Analysis (SCIA), 260–272. https://doi.org/10.1007/978-3-319-59126-1_22