Photo of Bernardo Pires

Academic Projects

In approximate reverse chronological order from project end
Fully Automated Robotic Unloader for Distribution Centers - Press Release
Tech. Lead, Perception Lead | Supported by Honeywell Intelligrated | Ongoing, joined April 2018
The robotic unloader uses artificial intelligence to operate fully autonomously inside a trailer, significantly reducing the manual effort required to operate receiving docks at retail merchandise and parcel distribution centers. Honeywell's new robotic unloader drives into a trailer or container and uses machine vision to identify various package shapes and sizes, as well as the optimal approach to unloading. A robotic arm with a series of small suction cups conforms to the package shape to gently extract it from the stack. A conveyor below the arm serves as a sweeper to move packages out of the trailer.
Pipeline Profiler: A Robotic System for the Internal Inspection of Water and Ore-Slurry Pipes for Mining Operations
Perception Lead | Supported by Anglo American | Ongoing, joined March 2017
The objective of this project is to design, develop, and field-validate Pipeline Profiler (PLP), a robotic crawler system for inspection of variable diameter (20 to 28 inch) pipes used by Anglo American Copper to transport water and ore-slurry over long distances in mining operations. PLP is untethered to minimize size and weight. It carries its own power, computing, and other subsystem hardware. Onboard sensors such as laser rangefinders, stereo cameras, and/or structured light provide data on the internal condition of the pipe, enabling the localization of potential burst sites. Data is stored on board and retrieved once the robot is withdrawn from the pipe. Post-deployment, data is displayed in an advanced GUI, which allows for fast data playback, 3D reconstruction, and measurement of the pipe.
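As a rough sketch of how a single laser-rangefinder ring scan could flag wall anomalies, one can compare each range reading against the pipe's nominal radius. All geometry, tolerances, and function names below are illustrative assumptions, not the actual PLP processing pipeline:

```python
import math

def radius_profile(scan, nominal_radius_m):
    """Given one ring of laser range readings (angle_rad, range_m) taken from
    the pipe axis, return per-angle deviation from the nominal radius."""
    return [(theta, r - nominal_radius_m) for theta, r in scan]

def flag_anomalies(deviations, tol_m=0.005):
    """Flag angles where the wall deviates more than tol_m from nominal;
    negative deviations suggest buildup, positive ones wall loss."""
    return [(theta, d) for theta, d in deviations if abs(d) > tol_m]

# Hypothetical scan of a 24-inch pipe (0.3048 m radius) with one defect.
nominal = 0.3048
scan = [(math.radians(a), nominal) for a in range(0, 360, 45)]
scan[3] = (scan[3][0], nominal + 0.02)  # simulated 2 cm wall-loss reading

dev = radius_profile(scan, nominal)
print(flag_anomalies(dev))  # one flagged angle (135 degrees)
```

A real system would also have to estimate the sensor's offset from the pipe axis before comparing ranges, which this sketch omits.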
Latest Generation Data Portal for the Intelligent Mobility Meter - IMM Project Page
PI | Supported by Mobility21 | Ongoing, started 2019
As the capabilities of the Intelligent Mobility Meter (IMM) mature, it has the potential to reach partners beyond the City of Pittsburgh and the State of Pennsylvania. The objective of this project is to create modern tools for the IMM to engage with state, industry, and non-profit organizations throughout the whole of the US. In particular, this project will focus on 1) the creation of a modern data portal, where participating organizations will be able to submit video data and download the statistics compiled by the IMM algorithms; and 2) further refinement and automation of the IMM tools to support the new data portal.
Applied Vehicle, Bicycle and Pedestrian Automated Counting Technologies for the Intelligent Mobility Meter - IMM Project Page
PI | Winner of the Mobility21 Smart Mobility Challenge | Ongoing, started September 2017
This project proposes to further improve the IMM's performance while tackling challenges that affect local governments in Southwestern Pennsylvania. Specifically, we propose to: 1) collect novel, large-scale, real-world datasets of visual data that will help further improve the automated detection, classification, and tracking algorithms of the IMM; and 2) provide valuable real-world traffic studies and actionable information to local government entities. To achieve these goals, the project is partnering with the Municipality of Bethel Park, the City of Greensburg, and leading engineering firm Michael Baker International. The local governments will guide the deployment of the meter to the locations that are most critical for infrastructure decision making and pledge to deploy the portable meters as necessary for the collection of the data. The CMU research group will use the data to improve the IMM algorithms and to analyze the specific traffic needs and challenges as requested by the municipalities. Finally, Michael Baker International will provide the necessary traffic engineering expertise and guidance so that the project's output will be useful and actionable for the municipalities.
Multimodal Data Fusion for Threat Detection Using Low Count Spectrometry
Perception Lead | Supported by The Department of Homeland Security | 2015-19
One of the most pressing challenges emerging from recent developments in nuclear threat detection is the need for effective algorithms that operate with low-count data. Some of the most pressing gaps in the Global Nuclear Detection Architecture involve the need to detect threats in broad areas and perimeters to protect unattended borders, coastline, marinas, general aviation, etc. Many of the solutions to these problems involve low-count data. As smaller gamma-ray spectrometers become available at lower prices, they will likely be used in large numbers to address these problems. The need for advanced algorithms to analyze low-count data will therefore continue to grow. We propose to develop and demonstrate tools for low-count analysis that can be applied to a broad range of problems involving nuclear threat. We will explore statistical and machine-learning approaches with both dynamic energy and time binning, and without any binning to avoid information loss and take full advantage of event-mode information. We will also apply advanced computer vision methodology as a platform for interpreting the surroundings of the sensor, so that the available secondary data can support inferences about the low-count spectra.
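One standard statistical building block in the low-count regime is a per-bin Poisson log-likelihood ratio comparing a "background plus source" hypothesis against "background only." The sketch below is a generic illustration of that idea, not the project's algorithm; all rates, bins, and dwell times are hypothetical:

```python
import math

def poisson_llr(counts, bkg_rate, src_rate, t):
    """Log-likelihood ratio of 'background + source' vs 'background only'
    for per-bin Poisson counts observed over dwell time t (seconds).
    bkg_rate and src_rate are expected counts/second per energy bin."""
    llr = 0.0
    for n, b, s in zip(counts, bkg_rate, src_rate):
        lam0 = b * t        # expected counts, background only
        lam1 = (b + s) * t  # expected counts, background + source
        # log P(n | lam1) - log P(n | lam0); the n! terms cancel
        llr += n * math.log(lam1 / lam0) - (lam1 - lam0)
    return llr

# Hypothetical 2-bin spectrum over a 10 s dwell: positive LLR favors source.
print(poisson_llr([15, 25], [1.0, 2.0], [0.5, 0.5], 10.0))
```

Thresholding such an LLR trades off detection probability against false-alarm rate; with very few counts the statistic is noisy, which is exactly why the secondary contextual data mentioned above becomes valuable.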
Multivariate Sensing for Mobile Platforms
co-PI | Supported by ChemImage Corporation | 2017-18
The objective of this project is to adapt and apply scene layout methods to multivariate sensing platforms including visible (RGB), Near Infrared (NIR), and Shortwave Infrared (SWIR) multi-spectral imaging (400nm-1700nm band). Part of this work included large-scale collection of multi-spectral imagery from a moving vehicle, which was used to train neural networks for cross-spectral stereo matching on challenging surfaces (including glass, glossy surfaces, and light sources) - see more details here. Additionally, this work produced the first comprehensive RGBN-SWIR Multispectral Database for powder recognition and demonstrated fine-grained recognition of 100 powders on complex backgrounds - see more details here.
The Intelligent Mobility Meter – Portable Fine-Grained Data Collection and Analysis of Pedestrian, Cyclist, and Motor Vehicle Traffic - IMM Project Page
PI | Supported by Mobility21 | 2017-18
The Intelligent Mobility Meter (IMM) is a portable data acquisition and analysis platform for the collection of fine-grained statistics on pedestrian, cyclist and vehicular traffic. The objective of the IMM project is to provide accurate and actionable data to government officials, transit advocates and industry organizations. IMM uses visual data to provide detailed car, bike and pedestrian counts as well as other statistics such as pedestrian wait-time at intersections; pedestrian, bike, and vehicle flow; vehicle classification; and identification of dangerous behaviors. Entities that already have their own video recording equipment will be able to submit videos directly to the IMM analysis platform. Alternatively, entities will also be provided with the IMM portable data collection platform, which can collect up to 10 hours of visual data.
Construction Zone Speed Detection
PI | Supported by T-SET | 2017-18
This project will develop a Speed Gun App to increase awareness of speeding, particularly in urban areas and construction zones. The objective of the Speed Gun App is to empower government officials and transportation advocates. It is well known that speeding, particularly in urban areas, is extremely dangerous to pedestrians and cyclists. However, concerned citizens are often powerless to tackle these problems. The Speed Gun App will allow users to obtain the approximate speed of passing cars. It is not intended for enforcement, but rather for drawing attention to localized speeding problems.
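At its simplest, a video-based speed estimate reduces to displacement over time once the image is calibrated to the road plane. The pixels-per-meter calibration and numbers below are hypothetical stand-ins, not the app's actual method:

```python
def estimate_speed_kmh(px_displacement, px_per_meter, frame_dt_s):
    """Convert a pixel displacement between two video frames into km/h,
    given a (hypothetical) pixels-per-meter calibration of the road plane."""
    meters = px_displacement / px_per_meter
    return meters / frame_dt_s * 3.6  # m/s -> km/h

# Hypothetical: a car moves 10 px between frames at 30 fps; 20 px = 1 m.
print(round(estimate_speed_kmh(10, 20, 1 / 30), 1))  # 54.0 km/h
```

In practice the estimate would be averaged over many frames and a tracked vehicle, since single-frame displacements are dominated by detection jitter.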
Pedestrian Detection for the Surtrac Adaptive Traffic System - Project Page
PI | Supported by T-SET | 2016
Surtrac, the real-time adaptive traffic signal control system, has been demonstrated to significantly improve traffic flow on multiple performance metrics, including reductions of 25% in travel time and 40% in wait time for motor vehicles. The objective of this project is to bring this same intelligence to pedestrian traffic, which has, thus far, not been targeted by Surtrac deployments. Phase 1 of this one-year project will analyze pedestrian traffic at multiple Surtrac deployments. Phase 2 will focus on an intersection already equipped with the Surtrac system in the Oakland / East Liberty region and will add sensing and processing capabilities to determine the presence of pedestrians waiting to cross the intersection.
Measuring Pedestrian Wait Times at Intersections - Project Page
PI | Supported by T-SET | 2016
Adaptive traffic lights have the potential to significantly facilitate car travel and reduce congestion. However, other road users, especially pedestrians, may suffer longer wait times if they are not taken into account by the adaptive algorithms. The objective of this project is to bring greater insight into the impact of such smart traffic light systems on pedestrian flow at key Pittsburgh intersections. The project leverages systems and methods developed in previous T-SET projects to quickly collect relevant statistics for stakeholders in the City of Pittsburgh, the Bike Pittsburgh organization, and the Surtrac team.
Automatic Counting of Pedestrians and Cyclists - Project Page
PI | Supported by T-SET | 2015
The goal of this project is to provide actionable data for government officials and advocates that promote bicycling and walking. Although the health and environmental benefits of a non-automobile commute are well known, it is still difficult to understand how to get more people to take up active transportation. Infrastructure can have a dramatic effect on cycling and walking adoption, but represents a significant outlay of government resources. Thus, concrete usage statistics are paramount for assessing and optimizing such spending. This project will create a vision-based cyclist and pedestrian counting system that will allow for automatic and human-assisted data collection and analysis. Unlike traditional non-vision counting methods, our system has the potential for much higher accuracy while providing valuable usage and demographic data that simply cannot be collected by other sensors.
In-Vehicle Vision-Based Cell Phone Detection - Project Page
PI | Supported by T-SET | 2015
According to the NOPUS survey, at any given daylight moment across America approximately 660,000 drivers are using cell phones or manipulating electronic devices while driving. Recently, there has been significant interest in the automatic detection of driver distraction. Such research often focuses on the driver's eyes in an attempt to detect gaze direction (i.e., where the driver is looking). The difficulty with such an approach is that it either requires active infrared illumination, which can be "blinded" by the sun, or requires significant computation to recognize the driver's face, determine pose, and estimate gaze. Instead of focusing on the driver's eyes, we propose to obtain an overhead or over-the-shoulder view of the car interior with the objective of determining whether the driver is holding or using a cell phone or other electronic device.
Multi-Robot/Multi-Sensor for Integrated Autonomous Navigation
Co-PI | Supported by the Korean Agency for Defense Development | 2012-14
Most autonomous vehicles rely on GPS for localization and navigation. However, it is well known that GPS can be jammed or unreliable. This project develops GPS-denied approaches to the localization of unmanned ground vehicles (UGVs) and unmanned aerial vehicles (UAVs) in remote areas with few man-made structures. The primary localization method is based on image matching between UGV, UAV, and satellite imagery. This is a difficult problem due to the drastic change in perspective between ground and aerial imagery and the lack of environmental features for image comparison. The developed solution uses a particle filter for localization and demonstrates that vision-based localization can outperform commercial GPS.
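The particle-filter idea can be sketched in miniature: each particle is a position hypothesis, moved by odometry and weighted by how well the imagery predicted at that position matches what the vehicle actually observes. The matching score below is a made-up Gaussian stand-in for the real UGV/UAV-to-satellite comparison, and all numbers are illustrative:

```python
import random, math

def particle_filter_step(particles, control, measure_score, motion_noise=0.5):
    """One predict-weight-resample cycle of a particle filter.
    particles: list of (x, y) hypotheses; control: (dx, dy) odometry;
    measure_score: function scoring how well a hypothesis matches imagery."""
    # Predict: apply odometry with additive noise.
    moved = [(x + control[0] + random.gauss(0, motion_noise),
              y + control[1] + random.gauss(0, motion_noise))
             for x, y in particles]
    # Weight: score each hypothesis against the observation.
    weights = [measure_score(p) for p in moved]
    total = sum(weights) or 1e-12
    weights = [w / total for w in weights]
    # Resample proportionally to weight.
    return random.choices(moved, weights=weights, k=len(moved))

# Hypothetical demo: converge onto an unknown position (10, 5) using a
# stand-in likelihood in place of an actual image-matching score.
random.seed(0)
true_pos = (10.0, 5.0)
score = lambda p: math.exp(-((p[0] - true_pos[0])**2 + (p[1] - true_pos[1])**2))
particles = [(random.uniform(0, 20), random.uniform(0, 20)) for _ in range(500)]
for _ in range(30):
    particles = particle_filter_step(particles, (0.0, 0.0), score)
est = (sum(x for x, _ in particles) / len(particles),
       sum(y for _, y in particles) / len(particles))
print(est)  # close to (10, 5)
```

The appeal of the particle representation here is that the cross-view matching score can be arbitrarily non-Gaussian and multimodal, which rules out a plain Kalman filter.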
Distributed Feature Extraction and Matching For Image Understanding
Co-PI | Supported by the Office of Naval Research | 2012-15
Fast and accurate processing of visual information gathered during a mission is critical for developing correct situational awareness. To generate an accurate picture of the battle space, it is often necessary to compare images coming from diverse sources and multiple locations in the battleground. This project develops methods for matching images across strong movement distortions as well as in the presence of occlusion, clutter, and different illumination and other imaging conditions. When more than one unit is deployed, it is important to combine the visual information obtained by all units, thus allowing for a more complete view or understanding of the battlefield.
Passive Gaze Tracking for Wearable Devices
Tech. Lead | Supported by QoLT and the PIA Consortium | 2012-15
Wearable devices with gaze tracking can assist users in many daily-life tasks. When used for extended periods of time, it is desirable that such devices do not employ active illumination for safety reasons and to minimize interference from other light sources such as the sun. Using visible spectrum images, however, is a challenging problem due to the complex anatomy of the eye, the occlusion introduced by the eyelids and eyelashes, and the reflections introduced by arbitrary illumination. This project created a real-time passive illumination gaze tracking system and explored applications to face and object detection, as well as helmet-based applications for the motorsports industry.
On-line Vision Screening
Tech. Lead | Supported by Highmark Inc through the DHTI. In collaboration with QoLT | 2012
On-line vision screening solutions currently on the market require the consumer to measure the distance from the screen (phone or other device) to the eyes of the person undergoing the vision test, which diminishes consumers' desire to use such tools. This project removed that technical limitation by creating an iPad app that automatically measures the distance from the user to the device and guides the user through a multi-step vision screening.
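Automatic distance measurement of this kind can be illustrated with the standard pinhole-camera relation: an object of known physical size appears smaller in the image the farther away it is. The focal length and face width below are illustrative guesses, not the app's calibrated values:

```python
def distance_from_face_width(focal_px, face_width_m, face_width_px):
    """Pinhole-camera estimate: distance = f * W / w, where f is the focal
    length in pixels, W the real face width (m), and w its width in pixels."""
    return focal_px * face_width_m / face_width_px

# Hypothetical numbers: front camera f of 1400 px, average face width 0.15 m,
# detected face 420 px wide in the image.
d = distance_from_face_width(1400, 0.15, 420)
print(round(d, 2))  # 0.5 m
```

A deployed app would calibrate the focal length per device model and could refine the known-size assumption using a feature with less person-to-person variation, such as interpupillary distance.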