
Academic Projects

The Intelligent Mobility Meter – Portable Fine-Grained Data Collection and Analysis of Pedestrian, Cyclist, and Motor Vehicle Traffic
PI - Supported by Mobility21 - Ongoing, started July 2017
The Intelligent Mobility Meter (IMM) is a portable data acquisition and analysis platform for the collection of fine-grained statistics on pedestrian, cyclist and vehicular traffic. The objective of the IMM project is to provide accurate and actionable data to government officials, transit advocates and industry organizations. IMM uses visual data to provide detailed car, bike and pedestrian counts as well as other statistics such as pedestrian wait-time at intersections; pedestrian, bike, and vehicle flow; vehicle classification; and identification of dangerous behaviors. Entities that already have their own video recording equipment will be able to submit videos directly to the IMM analysis platform. Alternatively, entities will also be provided with the IMM portable data collection platform, which can collect up to 10 hours of visual data.
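As a sketch of how fine-grained counts might be aggregated from tracked detections (the detector/tracker and its output format below are assumptions for illustration, not the IMM implementation):

```python
from collections import Counter

def count_unique(detections_per_frame):
    """Aggregate per-frame (class_label, track_id) detections into unique counts.

    Each tracked object is counted once, no matter how many frames it spans.
    """
    seen = set()
    counts = Counter()
    for frame_dets in detections_per_frame:
        for label, track_id in frame_dets:
            if (label, track_id) not in seen:
                seen.add((label, track_id))
                counts[label] += 1
    return counts

# Synthetic example: two frames, one car persisting across both frames.
frames = [
    [("car", 1), ("pedestrian", 7)],
    [("car", 1), ("bike", 3)],
]
print(count_unique(frames))  # Counter({'car': 1, 'pedestrian': 1, 'bike': 1})
```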
Construction Zone Speed Detection
PI - Supported by T-SET - Ongoing, started June 2017
This project will develop a Speed Gun App to increase awareness of speeding, particularly in urban and/or construction zones. The objective of the Speed Gun App is to empower government officials and transportation advocates. It is well known that speeding, particularly in urban areas, is extremely dangerous to pedestrians and cyclists. However, concerned citizens are often powerless to tackle these problems. The Speed Gun App will allow users to obtain the approximate speed of passing cars. It is not intended for enforcement, but for drawing attention to localized speeding problems.
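For illustration, an approximate speed can be derived from a pinhole-camera model; the focal length and camera-to-road distance in the example below are hypothetical values, not parameters from the actual app:

```python
def estimate_speed_mph(px_displacement, dt_seconds,
                       distance_to_road_m, focal_length_px):
    """Approximate vehicle speed from image-plane motion (pinhole model).

    For a car traveling parallel to the image plane, its real-world
    displacement is:  dx_m = px_displacement * distance_to_road_m / focal_length_px
    """
    dx_m = px_displacement * distance_to_road_m / focal_length_px
    speed_mps = dx_m / dt_seconds
    return speed_mps * 2.23694  # m/s -> mph

# Example: a car moves 300 px in 0.5 s, filmed from 15 m away with a
# phone camera whose focal length is roughly 1500 px.
print(round(estimate_speed_mph(300, 0.5, 15.0, 1500), 1))  # 13.4 mph
```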
Pedestrian Detection for the Surtrac Adaptive Traffic System - Project Page
PI - Supported by T-SET
Surtrac, the real-time adaptive traffic signal control system, has been demonstrated to significantly improve traffic flow on multiple performance metrics, including reductions of 25% in travel time and 40% in wait time for motor vehicles. The objective of this project is to bring this same intelligence to pedestrian traffic, which has thus far not been targeted by Surtrac deployments. Phase 1 of this one-year project will analyze pedestrian traffic at multiple Surtrac deployments. Phase 2 will focus on an intersection already equipped with the Surtrac system in the Oakland / East Liberty region and will add sensing and processing capabilities to determine the presence of pedestrians waiting to cross the intersection.
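A simplified illustration of the Phase 2 presence logic, assuming tracked pedestrian positions and a hand-drawn waiting zone; the dwell threshold and data layout are illustrative, not from the deployed system:

```python
def waiting_pedestrians(tracks, corner_zone, now, min_dwell_s=2.0):
    """Return track IDs of pedestrians who have lingered in the corner zone.

    tracks: {track_id: (x, y, timestamp_first_seen_in_zone or None)}
    corner_zone: (x_min, y_min, x_max, y_max) covering the waiting area
    A short dwell threshold filters out pedestrians merely passing through.
    """
    x0, y0, x1, y1 = corner_zone
    waiting = []
    for tid, (x, y, entered) in tracks.items():
        in_zone = x0 <= x <= x1 and y0 <= y <= y1
        if in_zone and entered is not None and now - entered >= min_dwell_s:
            waiting.append(tid)
    return waiting

# Example: pedestrian 4 has waited 3.1 s in the zone; pedestrian 9 just arrived.
tracks = {4: (52.0, 110.0, 10.0), 9: (60.0, 105.0, 12.9)}
print(waiting_pedestrians(tracks, (40, 90, 80, 130), now=13.1))  # [4]
```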
Measuring Pedestrian Wait Times at Intersections - Project Page
PI - Supported by T-SET
Adaptive traffic lights have the potential to significantly facilitate car travel and reduce congestion. However, other road users, especially pedestrians, may suffer longer wait times if they are not taken into account by the adaptive algorithms. The objective of this project is to bring greater insight into the impact of such smart traffic light systems on pedestrian flow at key Pittsburgh intersections. The project leverages systems and methods developed in previous T-SET projects to quickly collect relevant statistics for stakeholders in the City of Pittsburgh, the Bike Pittsburgh organization, and the Surtrac team.
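Once arrival and crossing times have been extracted from video tracks, the wait-time statistics themselves are straightforward; a small sketch (the event format is an assumption):

```python
from statistics import mean, median

def wait_time_stats(events):
    """Summarize pedestrian wait times from (arrival_ts, crossing_ts) pairs.

    arrival_ts:  when the pedestrian reached the corner (from video tracks)
    crossing_ts: when they stepped into the crosswalk
    """
    waits = [cross - arrive for arrive, cross in events]
    return {
        "n": len(waits),
        "mean_s": round(mean(waits), 1),
        "median_s": round(median(waits), 1),
        "max_s": round(max(waits), 1),
    }

# Example with four observed pedestrians (timestamps in seconds).
events = [(0.0, 12.5), (3.0, 12.8), (40.0, 95.0), (60.0, 95.5)]
print(wait_time_stats(events))
# {'n': 4, 'mean_s': 28.2, 'median_s': 24.0, 'max_s': 55.0}
```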
Automatic Counting of Pedestrians and Cyclists - Project Page
PI - Supported by T-SET
The goal of this project is to provide actionable data for government officials and advocates who promote bicycling and walking. Although the health and environmental benefits of a non-automobile commute are well known, it is still difficult to understand how to get more people to take up active transportation. Infrastructure can have a dramatic effect on cycling and walking adoption, but represents a significant outlay of government resources. Thus, concrete usage statistics are paramount for assessing and optimizing such spending. This project will create a vision-based cyclist and pedestrian counting system that will allow for automatic and human-assisted data collection and analysis. Unlike traditional non-vision counting methods, our system has the potential for much higher accuracy while providing valuable usage and demographic data that simply cannot be collected by other sensors.
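A common building block for vision-based counting is a virtual count line; the sketch below illustrates the idea with assumed centroid tracks, not the project's actual counter:

```python
def update_counts(prev_y, curr_y, line_y, counts):
    """Count a tracked object when its centroid crosses a virtual line.

    prev_y/curr_y: centroid y-coordinate in consecutive frames
    Downward and upward crossings are tallied separately, giving
    directional flow counts rather than a single total.
    """
    if prev_y < line_y <= curr_y:
        counts["down"] += 1
    elif curr_y < line_y <= prev_y:
        counts["up"] += 1
    return counts

counts = {"up": 0, "down": 0}
trajectory = [100, 140, 180, 230]        # centroid y over four frames
for prev, curr in zip(trajectory, trajectory[1:]):
    update_counts(prev, curr, line_y=200, counts=counts)
print(counts)  # {'up': 0, 'down': 1}
```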
In-Vehicle Vision-Based Cell Phone Detection - Project Page
PI - Supported by T-SET
According to the NOPUS survey, at any given daylight moment across America approximately 660,000 drivers are using cell phones or manipulating electronic devices while driving. Recently, there has been significant interest in the automatic detection of driver distraction. Such research often focuses on the driver's eyes in an attempt to detect gaze direction (i.e., to determine where the driver is looking). The difficulty with this approach is that it either requires active infrared illumination, which can be “blinded” by the sun, or requires significant computation to recognize the driver's face, determine pose, and estimate gaze. Instead of focusing on the driver's eyes, we propose to obtain an overhead or over-the-shoulder view of the car interior with the objective of determining if the driver is holding or using a cell phone or other electronic device.
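One way to frame this is as per-frame classification of a driver-region crop with temporal smoothing; in the sketch below the classifier is a stand-in parameter, and the ROI, threshold, and streak length are illustrative assumptions:

```python
import numpy as np

def phone_use_alert(frames, driver_roi, classifier, threshold=0.8,
                    min_consecutive=5):
    """Flag sustained phone use from an overhead view of the car interior.

    driver_roi: (x0, y0, x1, y1) crop around the driver's hands/shoulder area
    classifier: any function mapping an image crop -> P(phone in hand);
                the actual model is unspecified here, so it is a parameter.
    Requiring several consecutive positive frames suppresses one-frame blips.
    """
    x0, y0, x1, y1 = driver_roi
    streak = 0
    for i, frame in enumerate(frames):
        p = classifier(frame[y0:y1, x0:x1])
        streak = streak + 1 if p >= threshold else 0
        if streak >= min_consecutive:
            return i  # frame index where sustained use is confirmed
    return None

# Toy example: a "classifier" that keys on mean brightness of the crop.
frames = [np.full((120, 160), v, np.uint8) for v in (10,) * 3 + (220,) * 6]
demo = lambda crop: crop.mean() / 255.0
print(phone_use_alert(frames, (20, 20, 100, 100), demo))  # 7
```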
Multi-Robot/Multi-Sensor for Integrated Autonomous Navigation
Co-PI - Supported by the Korean Agency for Defense Development
Most autonomous vehicles rely on GPS for localization and navigation. However, it is well known that GPS can be jammed or unreliable. This project develops GPS-denied approaches to the localization of unmanned ground vehicles (UGVs) and unmanned aerial vehicles (UAVs) in remote areas with few man-made structures. The primary localization method is based on image matching between UGV, UAV, and satellite imagery. This is a difficult problem due to the drastic change in perspective between the ground and aerial imagery and the lack of environmental features for image comparison. The developed solution uses a particle filter for localization and demonstrates that vision-based localization can outperform commercial GPS.
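A minimal sketch of the particle-filter loop, assuming a 2-D position state and a stand-in appearance-matching likelihood (the project's actual ground-to-satellite matching is far more involved):

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, odom_delta, likelihood_fn,
                         motion_noise=0.5):
    """One predict/update/resample cycle of a 2-D particle filter.

    particles: (N, 2) candidate UGV positions in satellite-map coordinates
    odom_delta: measured (dx, dy) motion since the last step
    likelihood_fn: scores each position by how well the ground image
                   matches the satellite map there (stubbed in this sketch).
    """
    # Predict: move every particle by odometry plus noise.
    particles = particles + odom_delta + rng.normal(0, motion_noise,
                                                    particles.shape)
    # Update: reweight by the ground-to-satellite appearance match.
    weights = weights * likelihood_fn(particles)
    weights /= weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < len(particles) / 2:
        idx = rng.choice(len(particles), len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights

# Toy likelihood: the appearance match peaks at map location (10, 5).
lik = lambda p: np.exp(-0.5 * np.sum((p - [10.0, 5.0]) ** 2, axis=1))
parts = rng.uniform(0, 20, (500, 2))
wts = np.full(500, 1 / 500)
for _ in range(10):
    parts, wts = particle_filter_step(parts, wts, np.array([0.0, 0.0]), lik)
print(np.round(np.average(parts, axis=0, weights=wts), 1))  # approx [10.  5.]
```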
Distributed Feature Extraction and Matching For Image Understanding
Co-PI - Supported by the Office of Naval Research
Fast and accurate processing of visual information gathered during a mission is critical for developing correct situational awareness. To generate an accurate picture of the battle space, it is often necessary to compare images coming from diverse sources and multiple locations in the battleground. This project develops methods for matching images under strong motion-induced distortions as well as in the presence of occlusion, clutter, and differing illumination and other imaging conditions. When more than one unit is deployed, it is important to combine the visual information obtained by all units, allowing for a more complete understanding of the battlefield.
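A generic baseline for this kind of matching is local features plus geometric verification; the OpenCV sketch below uses ORB and a RANSAC homography as stand-ins for the project's methods:

```python
import cv2
import numpy as np

def match_images(img_a, img_b, min_inliers=15):
    """Match two views of a scene with ORB features and a RANSAC homography.

    The geometric-verification step discards appearance matches that are
    inconsistent with a single transform, which is what makes matching
    robust to clutter and partial occlusion.
    """
    orb = cv2.ORB_create(nfeatures=1000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_a, des_b)
    if len(matches) < min_inliers:
        return None
    src = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None or inlier_mask.sum() < min_inliers:
        return None
    return H  # 3x3 transform aligning img_a to img_b

# Synthetic check: a textured image matched against a shifted copy.
rng = np.random.default_rng(1)
img = np.zeros((240, 320), np.uint8)
for _ in range(40):
    x, y = int(rng.integers(10, 280)), int(rng.integers(10, 200))
    w, h = int(rng.integers(10, 40)), int(rng.integers(10, 40))
    cv2.rectangle(img, (x, y), (x + w, y + h), int(rng.integers(60, 256)), -1)
shifted = np.roll(img, 12, axis=1)
H = match_images(img, shifted)
print(None if H is None else np.round(H[:2, 2]))  # approx [12. 0.]
```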
Passive Gaze Tracking for Wearable Devices
Tech. Lead - Supported by QoLT and the PIA Consortium
Wearable devices with gaze tracking can assist users in many daily-life tasks. When used for extended periods of time, it is desirable that such devices do not employ active illumination for safety reasons and to minimize interference from other light sources such as the sun. Using visible spectrum images, however, is a challenging problem due to the complex anatomy of the eye, the occlusion introduced by the eyelids and eyelashes, and the reflections introduced by arbitrary illumination. This project created a real-time passive illumination gaze tracking system and explored applications to face and object detection, as well as helmet-based applications for the motorsports industry.
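As a rough baseline for visible-spectrum pupil localization (not the project's method, which must also cope with the eyelid occlusion and reflections noted above), one can threshold the darkest compact region of an eye crop:

```python
import cv2
import numpy as np

def locate_pupil(eye_gray):
    """Estimate the pupil center in a visible-spectrum eye crop.

    The pupil is usually the darkest compact region, so a simple baseline
    is: blur, threshold the darkest pixels, and take the centroid of the
    largest blob.
    """
    blur = cv2.GaussianBlur(eye_gray, (7, 7), 0)
    thresh_val = np.percentile(blur, 5)  # keep the darkest ~5% of pixels
    _, mask = cv2.threshold(blur, thresh_val, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask.astype(np.uint8), cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    m = cv2.moments(largest)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])  # (x, y) center

# Synthetic eye: bright sclera with a dark pupil disk at (60, 40).
eye = np.full((80, 120), 200, np.uint8)
cv2.circle(eye, (60, 40), 10, 20, -1)
print(tuple(round(c) for c in locate_pupil(eye)))  # approx (60, 40)
```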
On-line Vision Screening
Tech. Lead - Supported by Highmark Inc through the DHTI. In collaboration with QoLT.
On-line vision screening solutions currently on the market require the consumer to measure the distance from the screen (phone or other device) to the eyes of the person undergoing the vision test, which diminishes consumers' desire to use such tools. This project removed that technical limitation by creating an iPad app that automatically measures the distance from the user to the device and guides the user through a multi-step vision screening.
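The distance measurement itself can be expressed with a pinhole-camera relation; the sketch below assumes a known focal length and an average interpupillary distance (the actual product is an iPad app, so this Python version is purely illustrative):

```python
def distance_to_user_cm(eye_distance_px, focal_length_px,
                        interpupillary_cm=6.3):
    """Estimate user-to-screen distance from the detected eye separation.

    Pinhole model: Z = f * W / w, where W is the real interpupillary
    distance (population average ~6.3 cm) and w its size in pixels.
    The focal length in pixels comes from a one-time calibration or the
    device specs.
    """
    return focal_length_px * interpupillary_cm / eye_distance_px

# Example: eyes detected 130 px apart with a front camera of f ~ 850 px.
print(round(distance_to_user_cm(130, 850), 1))  # 41.2 cm
```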