Center for Applied Artificial Intelligence

Where cutting-edge technology meets
translational science.

The Center for Applied Artificial Intelligence functions within the University of Kentucky Institute for Biomedical Informatics to explore new technologies, foster innovation, and support the application of artificial intelligence in translational science. 

Recent Projects

Contributor

First released in November 2023, Grok-1 is an autoregressive transformer-based model with a context length of 8,192 tokens. On March 17, 2024, the Grok-1 weights and demonstration code were publicly released under the permissive Apache 2.0 license. The weights checkpoint is packaged in 770 files (tensor00000_000 – tensor00769_000) with a total size of 296 GB. Here are the results of running run.py from the official repository, which asks the model to complete the prompt "The answer to life the universe and everything is of course":

INFO:rank:Initializing mesh for self.local_mesh_config=(1, 8) self.between_hosts_config=(1, 1)...
INFO:rank:Detected 8 devices in mesh
INFO:rank:partition rules:
INFO:rank:(1, 256, 6144)
INFO:rank:(1, 256, 131072)
INFO:rank:State sharding type:
INFO:rank:(1, 256, 6144)
INFO:rank:(1, 256, 131072)
INFO:rank:Loading checkpoint at ./checkpoints/ckpt-0
INFO:rank:(1, 8192, 6144)
INFO:rank:(1, 8192, 131072)
INFO:runners:Precompile 1024
INFO:rank:(1, 1, 6144)
INFO:rank:(1, 1, 131072)
INFO:runners:Compiling...
INFO:rank:(1, 1, 6144)
INFO:rank:(1, 1, 131072)
INFO:runners:Done compiling.

Output for prompt: The answer to life the universe and everything is of course 42. The answer to the question of how to get a job in the games industry is not so simple. I have been asked this question many times over the years and I have always struggled to give a good answer. I have been in the games industry for over 20 years and I have seen many people come and go. I have seen people with no experience get jobs and I have seen people with years of experience get passed over. There is no one answer

Now, let's try something a bit more interesting, like a […]

Contributors

A multi-modal machine learning system uses multiple unique data sources and types to improve its performance. We propose a system that combines results from several types of models, all of which are trained on different data signals. As an example to illustrate the efficacy of the system, an experiment is described in which multiple types of data are collected from rats suffering from seizures. This data includes electrocorticography readings, piezoelectric motion sensor data, and video recordings. Separate models are trained on each type of data, with the goal of classifying each time frame as either containing a seizure or not. After each model has generated its classification predictions, these results are combined. While each data signal works adequately on its own for prediction purposes, the significant imbalance in class labels leads to increased numbers of false positives, which can be filtered and removed by utilizing all data sources. We will demonstrate that, after postprocessing and combination techniques, classification accuracy is improved with this multi-modal system when compared to the performance of each individual data source. Link to Full Paper: https://arxiv.org/abs/2402.00965
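As a rough illustration of the combination step, here is a minimal sketch of a majority-vote ensemble over per-frame predictions; the arrays and the two-of-three threshold are stand-ins, not the paper's actual postprocessing:

```python
import numpy as np

# Hypothetical per-frame binary predictions (1 = seizure) from three models,
# one per modality: ECoG, piezoelectric motion, and video (stand-in data).
ecog_pred = np.array([0, 1, 1, 0, 1, 0])
motion_pred = np.array([0, 1, 0, 0, 1, 0])
video_pred = np.array([0, 1, 1, 0, 0, 0])

# Stack and take a majority vote: a frame is flagged as a seizure only when
# at least two of the three modalities agree, which suppresses false
# positives that appear in a single noisy signal.
stacked = np.stack([ecog_pred, motion_pred, video_pred])
combined = (stacked.sum(axis=0) >= 2).astype(int)
print(combined)  # [0 1 1 0 1 0]
```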

Contributor

Recent studies show that transformer models make numerous contributions to several tasks such as classification, forecasting, and segmentation. This article is based on definitions in the paper 'Attention Is All You Need'. After the publication of that paper, transformer models gained significant popularity, and their usage in scientific studies has been rapidly increasing. In this article, we will explore how to modify a basic transformer model for a time series classification task and understand the underlying logic of the self-attention mechanism. First, let's look at the dataset we will use for training our model. It is a publicly available dataset provided by physionet.org, and you can easily download the 'mitbih_test.csv' and 'mitbih_train.csv' files via the link below. https://www.kaggle.com/datasets/shayanfazeli/heartbeat?select=mitbih_train.csv The training dataset contains more than 87,000 examples, one per row; each row consists of 188 columns, the last of which is the class label. It is time to talk about the modifications I have made to convert this transformer model from a language translation model into a signal classifier. You can find the model used for language translation via the link below. Transformer: PyTorch Implementation of "Attention Is All You Need" https://github.com/hyunwoongko/transformer Also, you can compare my modifications to this model via the link below, which will redirect you to […]
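To make the modification concrete, here is a minimal PyTorch sketch of the core idea: replace the token-embedding lookup with a linear projection of the raw signal, and replace the decoder with a pooled classification head. Dimensions and hyperparameters are illustrative, not the article's exact configuration:

```python
import torch
import torch.nn as nn

class SignalTransformerClassifier(nn.Module):
    """Sketch of adapting an encoder-only transformer for ECG classification.

    Instead of the token-embedding lookup used in language translation, each
    scalar sample of the 187-step MIT-BIH heartbeat signal is projected into
    the model dimension, and the decoder is replaced by a pooled linear head.
    """
    def __init__(self, seq_len=187, d_model=64, nhead=4, num_layers=2, num_classes=5):
        super().__init__()
        self.input_proj = nn.Linear(1, d_model)          # replaces token embedding
        self.pos_embed = nn.Parameter(torch.zeros(1, seq_len, d_model))
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(d_model, num_classes)      # replaces the decoder

    def forward(self, x):                 # x: (batch, seq_len)
        h = self.input_proj(x.unsqueeze(-1)) + self.pos_embed
        h = self.encoder(h)               # self-attention over time steps
        return self.head(h.mean(dim=1))   # mean-pool, then classify

model = SignalTransformerClassifier()
logits = model(torch.randn(8, 187))       # dummy batch of 8 heartbeats
print(logits.shape)                       # torch.Size([8, 5])
```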

Contributors

Two months ago, we purchased a robot called the temi 3 (available here). This robot features a large touch screen, LIDAR for obstacle avoidance, and text-to-speech, natural language processing, and automatic speech recognition included without any setup or training. The tablet attached to the top of the robot runs on Android and controls all functions of the robot (through an open-source SDK). Temi demos and videos are available on the Robot Temi YouTube channel. After learning about all the functionality baked into this robot and the vast documentation of its SDK, we were confident that we could use it to develop applications in our environment. There are two projects that we are interested in working on. Both projects involve the smell inspector device described in this post to classify smells.

Analyzing Smells in Hospital Rooms

First, we want to detect whether urine or stool is present in a patient's hospital room. This is in early development, and testing is taking place within our own office. Temi comes with a "patrol" mode, which visits all predefined waypoints on a floor map. To allow temi to roam freely, the robot needed to be led around our floor to build a map of the area and to add waypoints (e.g. Cody's office, Snack Room, Conference Room). Once this was complete, we could programmatically start a patrol using the SDK. After mapping and patrolling were set up, we needed a way to mount the smell inspector sensor to the robot. Dmitry […]

Contributors

Acknowledgements

This project is a collaboration between the Center for Applied AI and the following researchers: Dr. Yuan Wen (Institute for Biomedical Informatics, Department of Physiology), Dr. Laura Brown (Department of Physiology), and Dmitry "Dima" Strakovsky (College of Fine Arts).

Background

Volatile organic compounds (VOCs) are chemicals that evaporate easily at room temperature and are commonly found in various products and materials such as paints, cleaning supplies, fuels, and solvents. The measurement of these compounds has been evolving over decades. Today, thanks to advancements in collection techniques, collection devices fit in the palm of your hand and can detect trace amounts of VOCs. These devices are used for many applications, but we are interested in their applications to healthcare. Specifically, can we collect breath samples to create a machine-learning model that accurately predicts a patient's blood glucose? The VOC sensor device we use is named the "Smell Inspector" and is described in the "Smell Inspector" section below. We aim to 1) create a dataset with blood glucose measurements and 2) infer the blood glucose measurement from a breath sample in real time.

Introduction

This project is in its early phases. However, we have begun to collect breath samples from 4 volunteers in our office. None of these volunteers has diabetes or access to a blood glucose measurement device, so we decided to test the effectiveness of the VOC sensor by classifying peppermint breath vs. normal breath. By doing this, we can see how sensitive the smell inspector is […]
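To give a sense of what the peppermint-vs.-normal baseline could look like, here is a minimal sketch with stand-in data; the 64-channel shape is an assumption about the device, and the readings are random placeholders rather than real measurements:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical dataset: one row of channel readings per breath sample,
# assuming the Smell Inspector exposes 64 sensing channels (stand-in data).
X = rng.normal(size=(40, 64))       # 40 breath samples x 64 channels
y = np.array([0, 1] * 20)           # 0 = normal breath, 1 = peppermint

# A small random forest is a reasonable first classifier for low-sample,
# tabular sensor data; cross-validation guards against overfitting.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(scores.mean())
```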

Contributor

In 2020, Kentucky had the third-highest drug overdose fatality rate in the United States, and 81% of those deaths involved opioids. The purpose of this project is to provide research support to combat the opioid epidemic through machine learning and forecasting. The goal is to provide accurate forecasts at different geographical levels to identify which areas of the state are likely to be the most "high risk" in future weeks or months. With this information, adequate support could be prepared and provided to those areas, with the hope of treating victims in time and reducing the number of deaths associated with opioid-related incidents. The first step was to analyze which geographical level would be most appropriate for building and training a forecasting model. We had EMS data containing counts of opioid-related incidents at six different geographical levels: state, county, zip code, tract, blockgroup, and block. Through experimentation, we found that the county level is likely the most appropriate scale. The state level is too broad for useful results, while any level smaller than zip code proved to be too sparse. Machine learning models can rarely perform well when trained on data that consists of mostly zeros, and smaller geographical levels contain too few positive examples of incidents for any model to successfully learn the trends of each area. Additionally, the temporal level was chosen to be monthly, rather than yearly or weekly, because early testing suggested the best performance at the monthly scale. Even […]
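A minimal sketch of the county-month aggregation described above, using stand-in EMS records rather than the real data:

```python
import pandas as pd

# Hypothetical EMS records: one row per opioid-related incident (stand-in data).
ems = pd.DataFrame({
    "county": ["Fayette", "Fayette", "Jefferson", "Fayette", "Jefferson"],
    "date": pd.to_datetime(
        ["2020-01-03", "2020-01-19", "2020-01-21", "2020-02-02", "2020-03-14"]),
})

# Aggregate to the county-month level chosen above: one count per county per
# calendar month, with empty months filled in as explicit zeros so a
# forecasting model sees a regular time grid instead of missing rows.
counts = (ems.groupby(["county", ems["date"].dt.to_period("M")])
              .size()
              .unstack(fill_value=0))
print(counts)
```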

Contributor

In 2021, our team started development of a LoRaWAN network (see link) to cover the University of Kentucky campus/hospital and most, if not all, of Lexington (see link). This project is still active as we continue to develop new applications, but creating the infrastructure to support this network has been a challenge. Fortunately, a new offering from Amazon called Sidewalk, which provides this infrastructure, has recently begun rolling out (see link). Amazon Sidewalk blurb: Sidewalk is a low-bandwidth, long-range wireless network that aims to enhance the connectivity of smart devices within neighborhoods and cities by utilizing existing infrastructure to create a shared network. The network operates on a portion of the 900 MHz spectrum, allowing devices to communicate over longer distances than traditional Wi-Fi networks. Amazon Sidewalk utilizes a combination of Bluetooth Low Energy (BLE) and the 900 MHz spectrum to extend the range of compatible devices, such as smart locks, outdoor lights, and pet trackers. By leveraging the Sidewalk bridge, which acts as a gateway device, the network connects to the internet through a user's home Wi-Fi network. However, it's important to note that Amazon Sidewalk uses a small portion of a user's internet bandwidth, which is shared with nearby devices, including those owned by other Sidewalk users. (Protocol reference here) This means that a specific subset of Amazon devices, including Alexa-enabled Echo devices and Ring cameras, can be used to receive Bluetooth, FSK, and LoRaWAN transmissions from sensor devices, offering extensive coverage (specifically within Lexington, KY, […]

Contributor

In early April 2023, Meta AI released Segment Anything (SAM), a machine-learning-based segmentation model. The repository model of SAM was trained on a very general image database, so we have been re-training SAM to specifically process mammograms and identify any abnormalities within them. It is estimated that in 2020 there were roughly 2.3 million new cases of breast cancer, and one of the detection methods is using mammograms to visualize potentially cancerous abnormalities. The goal is to train SAM to automatically detect and annotate abnormalities in mammograms, with the intent of processing mammograms with greater accuracy and speed than current methods. The repository version of SAM is a parameterized predictive model: it uses only the parameters provided by Meta AI to guide it in identifying and segmenting different image components. Currently, we are training SAM specifically on mammograms so we can add and change parameters to focus more specifically on breast cancer detection. The expectation is that SAM will soon be able to identify abnormalities in a mammogram and then annotate those abnormalities to determine what they are (cancer, mineral deposits, healthy tissue, etc.). As we progress, the expectation is that after specifically identifying cancer or cancer-related abnormalities in mammograms, the SAM model can be extended to cancer screening in other tissues.
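For reference, here is a minimal sketch of prompting the off-the-shelf SAM with a single point, using Meta AI's released segment-anything package; the image here is a stand-in array rather than a real mammogram, and the checkpoint variant shown is one of the public releases:

```python
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

# Load a SAM checkpoint released by Meta AI (variant and path are assumptions).
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

# A mammogram loaded as an RGB array (stand-in data here); in practice this
# would come from a DICOM reader and be windowed down to 8-bit RGB.
image = np.zeros((1024, 1024, 3), dtype=np.uint8)
predictor.set_image(image)

# Prompt SAM with a single point on a suspected abnormality; fine-tuning on
# mammograms aims to make such prompts (or no prompt at all) reliable.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[512, 512]]),
    point_labels=np.array([1]),
    multimask_output=True,
)
print(masks.shape, scores)
```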

Contributor

We developed a program that visualizes the raw input data and ML-based smell detection analysis of the SmartNanotubes Smell Inspector. The Smell Inspector is based on electronic nose (E-nose) technology that uses nanomaterial elements to detect odors or volatile organic compounds (VOCs). Classification of smells occurs through pattern recognition algorithms incorporated as trained ML models. There are diverse potential applications, particularly in a health care setting, such as disease detection through breath sampling. The program consists of a user-friendly GUI application made using the Python Tkinter library. It continuously checks the sensor's connection status, allows the user to initiate the sensing process, and displays the raw signals on a bar plot, as well as the probabilities of the detected smells, updating in real time. The current program uses a trained neural network model to detect the smell of coffee. As we progress, we plan to improve the quality of the interface and expand the range of trained models to encompass a wide range of scent classifications.
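A minimal sketch of the GUI's update loop, assuming a hypothetical read_sensor() stand-in for the actual device driver and a reduced channel count:

```python
import random
import tkinter as tk
from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg
from matplotlib.figure import Figure

NUM_CHANNELS = 8  # assumption; the real device exposes more detector channels

def read_sensor():
    # Hypothetical stand-in for the Smell Inspector driver call.
    return [random.random() for _ in range(NUM_CHANNELS)]

root = tk.Tk()
root.title("Smell Inspector - raw signals")

fig = Figure(figsize=(5, 3))
ax = fig.add_subplot(111)
canvas = FigureCanvasTkAgg(fig, master=root)
canvas.get_tk_widget().pack(fill=tk.BOTH, expand=True)

def refresh():
    # Redraw the bar plot with the latest readings, then reschedule; this
    # after()-based loop is how Tkinter updates a plot in "real time"
    # without blocking the GUI event loop.
    ax.clear()
    ax.bar(range(NUM_CHANNELS), read_sensor())
    ax.set_ylim(0, 1)
    canvas.draw()
    root.after(500, refresh)

refresh()
root.mainloop()
```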

Contributors

Clinicians often produce large amounts of data, from patient metrics to drug component analysis. Classical statistical analysis can provide a peek into data interactions, but in many cases, machine learning can provide additional insight into new features. Recently, with the boom of new artificial intelligence models, these clinicians are more interested in applying machine learning to their data. However, in many cases, they may not possess the necessary knowledge and skills to effectively train a model and run inference. Fortunately, using AutoML techniques and a user-friendly web interface, we can provide these clinicians with a way to automatically train many different machine learning models on tabular data to find which produces the best results. Therefore, we present CLASSify as a way for clinicians to bridge the gap to artificial intelligence. Even with a web interface and clear results and visualizations for each model, it can be difficult to interpret how a model achieved its results or what it could mean for the data itself. Therefore, this interface can also provide explainability scores for each feature that indicate its contribution to the model's predictions. With this, users can see exactly how each column of the data affects the model and could gain new insights into the data itself. Finally, CLASSify also provides tools for synthetic data generation. Clinical datasets frequently have imbalanced class labels or protected information that necessitates the use of synthetically generated data that follows the same patterns and trends as real data. With this interface, users can generate entirely new […]
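As a rough sketch of the train-many-models-and-explain idea (not CLASSify's actual implementation), using scikit-learn on a public dataset:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Try several model families and keep the best, the core idea behind an
# AutoML-style search (a real AutoML system also tunes hyperparameters).
candidates = [LogisticRegression(max_iter=5000), RandomForestClassifier()]
best = max(candidates, key=lambda m: m.fit(X_tr, y_tr).score(X_te, y_te))
print(type(best).__name__, best.score(X_te, y_te))

# Per-feature explainability scores: permutation importance measures how much
# shuffling each column degrades the winning model's accuracy.
imp = permutation_importance(best, X_te, y_te, n_repeats=10, random_state=0)
print(imp.importances_mean.round(3))
```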

Contributor

Segment Anything is a segmentation algorithm created by Meta Research. To make segmentation of medical images available to UK hospital staff, a web interface that allows a layperson to interact with segmentation is needed. Meta Research provided a sample web interface that precompiled segmentations automatically but did not offer correction or manual segmentation features. From there, however, the open source community began to tinker, and we now have Segment-Anything-WebUI, which features a more robust toolset for segmenting images in the browser without needing to precompile any of the segmentations for viewing. Additionally, it allows you to upload local files to be segmented, then save the segmentations as JSON objects. This repository was the basis of the version we have developed at the Institute for Biomedical Informatics.

Accessing the Application

The web application is available in two forms. The first is through the hub site, which is hosted on University of Kentucky systems and is intended to assist in the annotation of medical images as well as the training of more useful and impressive model checkpoints for Segment Anything, which will improve annotation with the goal of automatic or single-click annotation. The second is downloading and building the repository on your own local machine. Instructions for building and running the site are available in the repository readme.

How It Works

Upload A File: opens a file browser and allows you to upload an image to segment. The image must […]

Contributor

We developed an AI model for detecting ultrasound image adequacy and positivity for the FAST exam (Focused Assessment with Sonography in Trauma [1]). The results have been accepted for publication in the Journal of Trauma and Acute Care Surgery. We deployed the model (based on DenseNet-121 [2]) on an edge device (NVIDIA Jetson TX2 [3]) with faster-than-real-time performance (on a video, 19 fps versus the expected 15 fps from an ultrasound device) using TensorRT [4] performance optimizations. The model is trained to recognize adequate views of the LUQ/RUQ (Left/Right Upper Quadrant) and positive views of trauma. The video below demonstrates the model's prediction of view adequacy. The device can be used as a training tool for inexperienced ultrasound operators, aiding them in obtaining better (adequate) views and suggesting the probability of a positive FAST exam. This project is a collaboration with the University of Kentucky Department of Surgery. The annotated data were provided by Brittany E. Levy and Jennifer T. Castle. [1] https://www.ncbi.nlm.nih.gov/books/NBK470479/ [2] Huang G, Liu Z, van der Maaten L, Weinberger KQ. Densely Connected Convolutional Networks. Proceedings – 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017. 2016;2017-January:2261-2269. DOI: 10.48550/arxiv.1608.06993 [3] https://developer.nvidia.com/embedded/jetson-tx2 [4] https://developer.nvidia.com/tensorrt
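For context, here is a minimal sketch of one common deployment path to TensorRT: export the architecture to ONNX, then build an engine with trtexec. This uses a stock DenseNet-121 rather than the trained FAST-exam weights, and the input size and opset are assumptions:

```python
import torch
import torchvision

# Export the architecture named above (not the trained FAST-exam weights) to
# ONNX, which TensorRT can then optimize for the Jetson TX2.
model = torchvision.models.densenet121(weights=None).eval()
dummy = torch.randn(1, 3, 224, 224)  # assumed input size
torch.onnx.export(model, dummy, "densenet121.onnx", opset_version=11)

# On the device, trtexec can build an optimized engine from the exported
# graph, e.g.:
#   trtexec --onnx=densenet121.onnx --saveEngine=densenet121.trt --fp16
```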

Contributor

Blockchain technology is a still-emerging field seeing increased usage beyond its conception in decentralized banking. One field that poses a unique challenge for blockchain adoption is healthcare, where the security of Protected Health Information (PHI) is of utmost importance. This project, which leverages the Hyperledger Fabric blockchain framework, seeks to design a system for managing pharmaceutical supplies among participating healthcare entities. Fabric's permissioned design and multi-signature transaction confirmation ensure that no individual participant in the network may unilaterally finalize a transfer of information, and that only known and vetted participants may govern the network. This makes it possible for the system to be maintained by authorized individuals within the various healthcare domains, while also avoiding the downsides of centralized record keeping. Leveraging smart contract capabilities allows for the development of functions that interact with the network when the proper parameters are met. These can be written in Go, JavaScript, or Java, and provide the logic that drives the user experience. A web or mobile application could act as an interface for a pharmacist to log and approve a transfer of a given quantity of a controlled substance to a patient, and as a method for that patient to log their daily intake. The modular design of Hyperledger Fabric allows for the modification of its consensus, multi-signature, and permission systems as necessary, making it an ideal system to adapt and adjust as needed. Network Architecture: A Fabric network follows an Organization-Peer format. For this implementation, an organization […]

Contributor

Time series forecasting is the process of analyzing historical data in order to draw conclusions and predict future outcomes. For this project, we explore, compare, and contrast different forecasting methods and models to determine their advantages and disadvantages. Through research papers and code documentation, we survey the commonly used methods for working with time series data and present those we will focus on. At regular intervals throughout this project, we will provide presentations covering the results of the different models and the takeaways from working with each method. The models used for this project were identified by researching which methods are most commonly used for time series forecasting. They include ARIMA, Exponential Smoothing, Theta, TBATS, Regression, Recurrent Neural Networks, the Temporal Fusion Transformer, and others. These models were tested on multiple types of datasets, including simple univariate and multivariate datasets tracking sales of different products, as well as larger and more complex datasets, such as one tracking household energy usage with weather and temperature covariates. Below are the results of the Temporal Fusion Transformer on this dataset, with the Root Mean Square Error given at the top. Included below are a full paper and presentation detailing the topic and the achieved results. More work will be done in the future to incorporate these methods into other projects. PowerPoint: Time Series Presentation Paper: Final Time Series Report
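As one example of the shared evaluation loop, here is a minimal ARIMA baseline on a stand-in series, scored with the same RMSE metric reported above:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)

# Stand-in univariate series (e.g., monthly sales); the real experiments used
# the datasets described above.
series = np.cumsum(rng.normal(size=120)) + 50
train, test = series[:108], series[108:]

# Fit a simple ARIMA baseline and forecast the 12 held-out steps; the same
# train/forecast/score loop was applied to each model in the comparison.
fit = ARIMA(train, order=(2, 1, 1)).fit()
forecast = fit.forecast(steps=len(test))
rmse = np.sqrt(np.mean((forecast - test) ** 2))
print(f"RMSE: {rmse:.3f}")
```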

Contributor

The Philips iSyntax format is not directly supported by common open source digital pathology libraries. Currently, in order to use iSyntax files outside of the Philips environment, one must either use the iSyntax SDK or convert iSyntax files to a supported format such as TIFF. The iSyntax SDK Terms & Conditions are problematic for open tools, and the conversion of files is very expensive in both computation and storage. Our team, led by Alexandr Virodov, is looking for collaborators interested in developing iSyntax support for the OpenSlide, Bio-Formats, and cuCIM libraries. Efforts to support iSyntax in OpenSlide are already well underway thanks to supporting iSyntax code developed by Pieter Valkema as part of amspath/slidescape. The following video demonstrates the support of iSyntax files in Digital Slide Archive (DSA) using our OpenSlide code: https://medicine.ai.uky.edu/wp-content/uploads/2022/08/isyntax_in_dsa.mp4 If you have interest in this effort, including beta testing this code, please let us know (Alexandr.Virodov@uky.edu or cody@uky.edu). Our intent is to initially push iSyntax support into mainline OpenSlide and then expand to other libraries as needed. This work was presented at the AMIA 2023 Informatics Summit: Abstract, Slides. (The test slide file used is obtained from https://gitlab.com/BioimageInformaticsGroup/openphi, another project aiming to address iSyntax support using the Philips SDK.)
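Once support is merged, opening an iSyntax file should look like standard OpenSlide usage. A minimal sketch (the filename is hypothetical, and the iSyntax-enabled build is still in development):

```python
import openslide

# With iSyntax support in OpenSlide, a Philips slide would open through the
# same API as any other supported format (filename is a placeholder).
slide = openslide.OpenSlide("example.isyntax")
print(slide.dimensions, slide.level_count)

# Read a 512x512 region at the highest-resolution level, as a viewer like
# the Digital Slide Archive would when serving tiles.
region = slide.read_region((0, 0), 0, (512, 512))
region.convert("RGB").save("tile.png")
```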

Contributor

Abstract – Many works in biomedical computer science research use machine learning techniques to achieve accurate results. However, these techniques may not be feasible for real-time analysis of data pulled from live hospital feeds. In this project, machine learning techniques from various sources are compared to find one that provides not only high accuracy but also the low latency and memory overhead needed in real-world health care systems. Code: GitHub arXiv Link: Survey of Machine Learning Techniques To Predict Heartbeat Arrhythmias
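As a minimal illustration of the latency side of that comparison, here is a sketch that times single-record inference the way a live feed would issue it; the model and data are stand-ins, not the survey's benchmark suite:

```python
import time
import numpy as np
from sklearn.linear_model import LogisticRegression

# Stand-in for a stream of single-beat feature vectors from a live feed.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(1000, 187)), rng.integers(0, 2, size=1000)
clf = LogisticRegression(max_iter=1000).fit(X, y)

# For real-time use, per-sample inference latency matters as much as
# accuracy; time one-record predictions as a live feed would issue them.
start = time.perf_counter()
for row in X[:100]:
    clf.predict(row.reshape(1, -1))
elapsed = (time.perf_counter() - start) / 100
print(f"mean latency: {elapsed * 1000:.3f} ms per beat")
```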

Contributor

Introduction: The Institute for Biomedical Informatics is always looking for ways to improve the efficiency of our hospital system. One way to do this is to transport clinical specimens between Chandler Hospital, Shriner's Hospital, and Good Samaritan Hospital using AI pathfinding and collision avoidance. By training a model for a drone or ground vehicle, we seek to incorporate intelligent robotics into the UK hospital system's delivery workflow. Technology Overview: The current simulation tool is Microsoft AirSim, visualized through Unreal Engine, which allows us to simulate cars and drones in a digital twin of UK's campus. This software allows vehicles to be trained to avoid collisions and follow paths through complex environments. The goal is to train drones and ground vehicles to transport specimens between locations on campus intelligently, efficiently, and safely. Current Efforts:
– Creating a functional environment with collision detection utilizing LIDAR data of UK's campus and hospitals
– Building a model for path following and collision detection that can be applied to a real-world robot
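A minimal sketch of commanding a simulated drone through AirSim's Python API; the coordinates are placeholders, not actual campus waypoints:

```python
import airsim

# Connect to a running AirSim simulation and take manual control of a drone.
client = airsim.MultirotorClient()
client.confirmConnection()
client.enableApiControl(True)
client.armDisarm(True)

client.takeoffAsync().join()
# Fly to a point (NED coordinates, meters; placeholder values) at 5 m/s. The
# simulator reports collisions, so a training loop can penalize unsafe paths.
client.moveToPositionAsync(10, 0, -10, 5).join()
print(client.simGetCollisionInfo().has_collided)
client.landAsync().join()
```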

Contributor

Abstract – As a relatively new technology, blockchain's applications are still being explored in many fields. Its usage for the secure storage of data makes it an ideal candidate for updating the modern healthcare communication system. Currently, hospitals in the United States have no good way to securely communicate important patient data between facilities. Using Python and Hyperledger Iroha, this project looks into the applications of blockchain to manage access to healthcare data and improve healthcare for patients and providers. Link to Paper: Blockchain and Healthcare Communication As of May 2, 2022, a demo of the current iteration is available on YouTube. This is a project in progress, with plans for continued work. The source code in Python is available on GitHub.
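For a flavor of the Python tooling involved, here is a minimal sketch of submitting a transaction with the iroha-python client; the accounts, asset, and node address are placeholders, not the project's actual ledger design:

```python
from iroha import Iroha, IrohaCrypto, IrohaGrpc

# Sketch of submitting a record to a local Iroha node; all identifiers and
# the node address below are placeholders.
admin_key = IrohaCrypto.private_key()
iroha = Iroha('admin@healthcare')
net = IrohaGrpc('127.0.0.1:50051')

# Encode a (hypothetical) patient-record handoff as an asset transfer with a
# description field; peers must validate the transaction before commit.
tx = iroha.transaction([
    iroha.command(
        'TransferAsset',
        src_account_id='admin@healthcare',
        dest_account_id='clinic@healthcare',
        asset_id='record#healthcare',
        description='encrypted record pointer',
        amount='1',
    )
])
IrohaCrypto.sign_transaction(tx, admin_key)
net.send_tx(tx)
```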

Contributor

Introduction: An issue for our healthcare facilities, as well as many others around the country, is better understanding the needs of patients. If a patient has experienced a form of trauma, talking to a doctor face-to-face may be difficult. This is where NVIDIA Omniverse can help doctors extract vital information that they would otherwise miss. Chatting with a non-threatening animated avatar (shown in this NVIDIA Audio2Face demo) may help a patient, especially a child, to speak openly. Another emerging use case is the tracking of blood glucose readings for diabetic patients. To accomplish this, a text message would be sent to a patient reminding them that it is time to take their glucose reading. This message would then guide the patient to a web-based chat interface (using NVIDIA Riva and Rasa as text processors and response generators) that walks the patient through an interactive questionnaire to collect vital statistics and analyze parameters over time. Technology Overview: There are two applications from NVIDIA and one from Rasa Technologies, Inc. in use to make this pipeline work. The first is NVIDIA Audio2Face. Audio2Face creates natural facial movement with the help of AI based on either live or recorded audio input (Fig. 1). This application comes with three out-of-the-box avatars/face models. However, if needed, an avatar mesh can be created using standard modeling tools (such as Blender or Unreal Engine) and then imported. Once set up, Audio2Face can interact with Riva (described below) to add a visual […]

Contributor

This project leverages recent advancements in conversational artificial intelligence (AI), speech-to-text, natural language understanding (NLU) [1], and finite-state machines to automate protocols, specifically in research settings. The application must be generalized, fully customizable, and independent of any particular research study. These properties allow new research protocols to be created quickly once envisioned. With this in mind, I present SmartState, a fully customizable, state-driven protocol manager combined with supporting AI components that autonomously manages user data and intelligently determines users' intentions through chat and end-device interactions to drive protocols. [1] T. Bocklisch, J. Faulkner, N. Pawlowski, and A. Nichol, Rasa: Open source language understanding and dialogue management, 2017. DOI: 10.48550/ARXIV.1712.05181. [Online]. Available: https://arxiv.org/abs/1712.05181. Code: GitHub arXiv Link: SmartState: A Protocol-Driven Human Interface, Submitted to AMIA 2023 Annual Symposium
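To illustrate the state-driven core (not SmartState's actual schema), here is a minimal sketch of a protocol as a table of intent-triggered transitions; the state and intent names are illustrative:

```python
# A protocol as a transition table: (current state, recognized intent) -> next
# state. Unknown intents leave the state unchanged.
PROTOCOL = {
    ("awaiting_checkin", "user_checked_in"): "awaiting_reading",
    ("awaiting_reading", "reading_received"): "completed",
    ("awaiting_reading", "user_declined"): "escalated",
}

class ProtocolManager:
    def __init__(self, start="awaiting_checkin"):
        self.state = start

    def handle(self, intent):
        # The NLU layer (e.g., Rasa) maps a chat message to an intent string;
        # the state machine decides what that intent means right now.
        self.state = PROTOCOL.get((self.state, intent), self.state)
        return self.state

pm = ProtocolManager()
print(pm.handle("user_checked_in"))   # awaiting_reading
print(pm.handle("reading_received"))  # completed
```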

Contributor

A project in collaboration with the Healthier Futures Lab (HealthFuL). Cigarette smoking remains the number one cause of preventable morbidity and mortality in the U.S. Developing novel, efficacious smoking cessation interventions that are accessible, scalable, and sustainable for smokers, especially among vulnerable populations (e.g., low SES), will make a significant public health impact. One of the most powerful, evidence-based behavioral interventions for smoking cessation is contingency management. Contingency management interventions have recently become more accessible by being delivered remotely over the Internet and via mobile devices, like the RAM App. Contingency management is premised upon the provision of a desirable outcome (a.k.a. an incentive), which has historically been money for ease of use but could be any desired commodity. Because video games are a rapidly evolving industry, we determined it would be more impactful to develop a method for managing incentives that is not vulnerable to changes in social trends or advances in technology. We determined that making access to an individual's own phone contingent upon a target behavior would provide the user with a personalized experience not subject to external market changes. In other words, as long as the individual desires access to content on his or her phone, we can leverage that desire to help motivate behavior change. Using the RAM App as a basis, we are developing Re-Connect, a personalized, smartphone-based contingency management intervention that requires individuals to provide objective evidence of smoking abstinence in order to gain access to some of their most […]

Contributor

For patients visiting a large hospital system for the first time, it can be overwhelming to figure out where you need to go, or even to find someone who can point you in the right direction. Employees in the hospital have similar struggles when moving samples from one building to another. We're exploring solutions to these problems using robotics and artificial intelligence. Microsoft's AirSim provides a convenient framework for modeling drones and vehicles in a digital scene that can be controlled programmatically. We've worked out a proof of concept for running a simulation on a remote server and controlling multiple drones in the simulation using an OpenAI Gym environment wrapper. This gives us an easy way to extract observations to train an AI agent using reinforcement learning. For the drones, there is a package called PX4 which can be used to chart routes between locations in the simulator. We're currently working on creating a 3D model of UK's campus from LiDAR data, and a model of the hospital system from floorplans using Blender and Unreal Engine. Link to the Repo
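A minimal sketch of the Gym-wrapper idea with a toy in-memory environment; the real wrapper would instead issue commands to airsim.MultirotorClient and read back pose and collision info as observations:

```python
import gym
import numpy as np

class DroneEnv(gym.Env):
    """Sketch of wrapping a simulated drone as a Gym environment (simplified;
    a real wrapper talks to AirSim over the network)."""

    def __init__(self):
        self.action_space = gym.spaces.Discrete(4)  # N / S / E / W steps
        self.observation_space = gym.spaces.Box(-np.inf, np.inf, shape=(3,))
        self.position = np.zeros(3, dtype=np.float32)

    def reset(self):
        self.position = np.zeros(3, dtype=np.float32)
        return self.position

    def step(self, action):
        # In the real environment this would issue a velocity command to
        # AirSim and read back pose and collision info as the observation.
        moves = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0], [0, -1, 0]])
        self.position += moves[action]
        reward = -1.0  # step penalty encourages short, efficient routes
        done = np.linalg.norm(self.position[:2] - np.array([5, 5])) < 1
        return self.position, reward, done, {}

env = DroneEnv()
obs = env.reset()
obs, reward, done, info = env.step(env.action_space.sample())
```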

Contributor

Community Asset Registry for the Empowerment of Kentucky (CARE-KY) provides a repository of free or reduced-cost programs that can help meet residents' needs for things like food, housing, transportation, utilities, family and community support, and personal safety. CARE-KY was developed in collaboration with the Kentucky Consortium for Accountable Health Communities project. We connect residents of all 120 counties in Kentucky to the community resources they need. CARE-KY is a multi-platform application, available as a web app and on the iOS and Android app stores. Currently, the platform is being redesigned to provide:
– A clear, concise design
– A mobile-friendly layout
– Streamlined search
– Space for featuring important resources
In addition, the goal is to improve and streamline the search feature:
– Concise search parameters
– Results viewable on a map
– Easy paging

Contributor

A project in collaboration with the Healthier Futures Lab (HealthFuL). Contingency management is a highly efficacious treatment that can reduce drug and alcohol use. In contingency management treatments, tangible rewards (e.g., money, privileges, or prizes) are provided contingent upon verified abstinence from the target substance. Among adults with alcohol use disorder, delayed outcomes have relatively little control over behavior. Therefore, immediately available rewards, such as those arranged in contingency management interventions, are much more likely to promote positive behavior change than the health or social gains associated with long durations of abstinence. The RAM is an Android application that currently notifies participants when a breathalyzer reading is due, automatically connects to the participant's BACtrack breathalyzer, guides the participant through the breathalyzer submission process with on-screen prompts, takes three pictures of the participant's face during submission, and submits the breathalyzer reading and facial pictures to a secure server. We are currently working on improving the app's offline capabilities, adding support for new prize-based incentive protocols, and introducing a CO monitor to help track smoking among participants. In addition, the goal is to make this a cross-platform application that also works on iOS devices.