Episode 8 - Rik Legault, Director of the Office for Public Safety Research (OPSR), Department of Homeland Security (DHS) Science and Technology Directorate (S&T), First Responders Group (FRG)

Smart Government, Safe Communities show

Summary: Interview with Rik Legault, Director of the Office for Public Safety Research (OPSR) at the Department of Homeland Security (DHS) Science and Technology Directorate (S&T), First Responders Group (FRG).

About OPSR

The office serves as the science and technology advisor to the agency, the public, and frontline first responders. OPSR houses the Directorate's social, behavioral, and economic science capabilities, along with law enforcement support, digital forensics, and protection of national critical infrastructure. The office has a broad mandate, with a great deal of need throughout the department and the Homeland Security Enterprise - any state, local, first responder, or public or private entity that contributes to the DHS mission. Its work includes:

- Evaluation research and support
- Development of new capabilities, by helping people develop new procedures, policies, and techniques based in evidence
- Analysis of the resulting data to improve understanding and operational success

What are the challenges of measuring R&D?

There is an incentive to successfully deliver something to someone, but not the same appetite for understanding whether you made a mission impact. Evaluation costs money, and the incentives cut against it: if something works properly, you get credit for it working properly, but if it does not, people worry about the negative impact on their lives and careers. In the past, the Department of Justice has required up to 20% of a project's total budget to be spent on evaluation.

At the end of an investment, it is important to make sure you are not doing harm, and to understand what you are actually getting out of that investment in the real world. Even if the thing you developed does everything it is supposed to do, it may not produce the outcomes you desired or had in mind when creating it. For example, George Mason University's Center for Evidence-Based Crime Policy published a report on license plate reader technology and found that it changed the way officers spent time on the job without having any impact on their clearance rates.

How can you think through those outcomes?

The objectives need to be understood from the beginning, and randomized trials can be used more often to understand how technology is implemented. You can figure out how products are actually used and how users spend their time, and better understand the benefits, detractions, and unintended consequences - both positive and negative.

Examples at S&T

S&T developed a training system to help TSA officers better identify threats when looking at images of bags. A great deal of money had been invested in better scanner technology, but none in how people were performing their jobs. S&T did research with TSA, comparing trained officers to non-trained professionals. The technology helped determine what screeners were missing when looking at different parts of a bag and provided instant feedback that instructors could use to improve search. Without increasing the time it took to scan, this produced an immediate, long-term 2% increase in accuracy. When you extrapolate across annual screening volume, that equates to millions of threats found per year (a rough sketch of that extrapolation follows below).
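The arithmetic behind that extrapolation is simple to sketch. In the minimal, hypothetical version below, only the 2% accuracy gain comes from the episode; the screening volume and threat-item rate are placeholder assumptions, not TSA figures.

```python
# Back-of-envelope sketch of the extrapolation above.
# Only ACCURACY_GAIN comes from the episode; the volume and
# threat rate are hypothetical placeholders, not TSA data.

ACCURACY_GAIN = 0.02            # 2% more threats detected (from the episode)
BAGS_PER_YEAR = 2_000_000_000   # hypothetical: bags screened annually
THREAT_RATE = 0.01              # hypothetical: fraction of bags with a threat item

threat_bags = BAGS_PER_YEAR * THREAT_RATE
extra_detections = threat_bags * ACCURACY_GAIN

print(f"Threat-containing bags per year (hypothetical): {threat_bags:,.0f}")
print(f"Additional detections from a 2% gain: {extra_detections:,.0f}")
```

The structure, not the placeholder values, is the point: a small per-scan accuracy gain multiplies across enormous screening volume, which is how a 2% improvement can translate into very large absolute numbers of additional detections.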
Most Common Errors with R&D Programs

Culture kills - there is a lot of pushback. People need to understand that all findings will be provided in context and with recommendations, and it is hard to get them to understand that "I am here to help." In other areas - medicine, local policing - engagement is more prevalent, because there are stronger incentives to work with researchers to understand what you are doing well and what you are not. Practitioners there want to identify their own problems and fix them early. If you can identify the problem, understand what is causing it, and articulate your plan to fix it, you are always much better off than if someone else discovers your problem. Independence and objectivity are important, and combined with the right expertise they lead to credibility.

Quick Experiments

It is important for evaluators to be involved from the beginning; the evaluation team thinks entirely about data and measurement for your objective. Spiral development works especially well when evaluators are involved from the start, because they can adjust and collect data as the project evolves. Collection can be done very effectively ahead of time if you have an evaluation plan. Too often, no one thinks about evaluation until after the fact, and by then there is no baseline and the data are hard to get.

Big Data

Artificial intelligence (AI), machine learning, and big data are very popular, but they are not new concepts. As the technology develops, it is important to think about how to apply it in a smart way, and programs that combine data science and behavioral science are vitally important. It is also important to understand that correlation is not causation. Causation requires time order: Did the cause come before the effect? Is there a mathematical correlation? Have all other causal factors been eliminated? Measurement error arises whenever you are talking about people, because people do not behave like machines. Theory is very important in social science for determining causation, and many big data efforts lack coherent theory. Combining people with backgrounds in strongly theoretical fields will help move technology into reasonable use faster. (A minimal simulation of the correlation-versus-causation point appears after the links below.)

Want to Know More?

Visit Firstresponder.gov for products, documents, and summaries of S&T's work. S&T is also on Facebook and Twitter at @dhsscitech.
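To make the correlation-versus-causation point from the Big Data section concrete, here is a minimal sketch in Python. The variables are hypothetical and not drawn from any DHS or S&T study: a hidden confounder drives both a "technology use" measure and an "outcome" measure, producing a strong correlation even though neither causes the other.

```python
# Minimal sketch: correlation without causation via a hidden confounder.
# Hypothetical toy data; not drawn from any DHS or S&T study.
import random

random.seed(42)

n = 10_000
confounder = [random.gauss(0, 1) for _ in range(n)]          # e.g., agency size
tech_use = [c + random.gauss(0, 0.5) for c in confounder]    # driven by confounder
outcome = [c + random.gauss(0, 0.5) for c in confounder]     # also driven by confounder

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx ** 0.5 * vy ** 0.5)

# Strong correlation (about 0.8) even though tech_use never influences outcome:
print(f"corr(tech_use, outcome) = {pearson(tech_use, outcome):.2f}")
```

The criteria Legault lists are exactly what this sketch fails: the correlation is there, but there is no time order and the confounding factor has not been eliminated. A randomized trial breaks the link by assigning tech_use independently of the confounder, which is why the episode points to randomized trials for understanding technology implementation.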