OPIM News


The Heart Rate Hat

Wearable biometric technology is currently revolutionizing healthcare and consumer electronics. Internet-enabled pacemakers, smart watches with heart rate sensors and all manner of medical equipment now merge health with convenience. This is all a step in the right direction, but not quite the end goal. In fact, it seems that we’re past the point where technology can fix all the problems we’ve created. If this is the case then, at the very least, I think we can have a bit of fun while we’re still alive. Let us not mince words: We’re headed straight for a global environmental collapse, and I propose we go out in style.

For this reason, I’ve designed the Heart Rate Hat. This headwear is simultaneously a fashion statement, a demonstration of the capabilities of biometric sensors and the first step to the creation of a fully functioning cyborg. The Heart Rate Hat is exactly what I need to tie together my vision for a cyberpunk dystopia. Combined with other gear from the lab, such as our EEG headset and our electromyography equipment, we could raise some very interesting questions about what kinds of biometric data we actually want our technology to collect. We could also explore the consequences of being able to quantify human emotion and the use of such data to make predictions. What once was the realm of science fiction, we are now turning into reality.

The design of the device is relatively simple. I used the FLORA (a wearable, Arduino-compatible microcontroller) as the brain of the device, a Pulse Sensor to read heart rate data from the user’s earlobe and several NeoPixel LED sequins for output. The speed at which the LEDs blink is determined by heart rate, and the color is determined by heart rate variance. In the process of designing this, I learned how to work with conductive thread, how to program addressable LEDs and how to read and interpret heart rate and heart rate variance data from a sensor. As these are all important skills for wearable electronics prototyping, a similar project may be viable as an introduction to wearable product design. Going forward, I would like to explore other options for visualizing output from biometric sensors. I may work on outputting data from such a device to a computer via Bluetooth, incorporating additional sensors into the design and logging the data for later analysis.
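The FLORA itself is programmed in Arduino-style C++, but the mapping at the heart of the hat is simple enough to sketch out. Below is a rough Python illustration of the idea (the thresholds and interval values are made up for illustration, not taken from the actual firmware): blink speed tracks the heart rate, and color shifts with how much the beat-to-beat intervals vary.

```python
# Illustrative sketch only (the real device runs Arduino C++ on the FLORA):
# map beat-to-beat intervals to a blink rate and an RGB color.

def blink_interval_ms(bpm):
    """Faster heart rate -> faster blinking (one blink per beat)."""
    return 60000.0 / bpm

def color_from_variance(intervals_ms):
    """Map heart rate variance to color: steady rhythm -> blue,
    highly variable rhythm -> red. Thresholds are invented for illustration."""
    mean = sum(intervals_ms) / len(intervals_ms)
    variance = sum((x - mean) ** 2 for x in intervals_ms) / len(intervals_ms)
    if variance < 500:
        return (0, 0, 255)      # blue
    elif variance < 2000:
        return (0, 255, 0)      # green
    return (255, 0, 0)          # red

# Example beat-to-beat gaps in milliseconds (hypothetical readings).
recent_intervals = [820, 790, 845, 810]
bpm = 60000.0 / (sum(recent_intervals) / len(recent_intervals))
print(blink_interval_ms(bpm), color_from_variance(recent_intervals))
```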

Written by: Eli Udler


K-Cup Holder for the UConn Writing Center

One of the most amazing things about 3D printing is the speed at which an idea becomes a design. With the growing prevalence of this technology, the time between thinking up an object that you would like to exist and seeing it constructed continues to decrease. The thought of turning something I dreamed up into a reality was my primary inspiration for this project: a sculpture of the UConn Writing Center logo that doubles up as a K-Cup holder.

I was excited to find out that a Writing Center tutor was kind enough to donate a Keurig to the office, putting life-giving caffeine in the hands of tutors without the cost of running down to Bookworms Cafe. Alas, it was disturbing to see that the K-Cups used by the machine were being stored in a small basket. Now, I’m not the Queen of England or anything, but I have my limits. The toll on my mental health taken by watching the cups lazily tossed into a pile in the woven container was enough to force me to take action. With less than half an hour of active work, I was able to turn the Writing Center logo, a stylized “W”, into a three-dimensional model complete with holes designed to hold K-Cups.

But there’s another reason I decided to turn my strange idea into a reality: I wanted to highlight the range of resources offered on campus to UConn students. The OPIM Innovate space and the Writing Center aren’t so different, really. While the Writing Center can assist students with their writing in a variety of disciplines, Innovate provides a range of tech kits that teach students about emerging technologies. Both are spaces outside the classroom where students can learn relevant skills, regardless of their majors. Most importantly, perhaps, both were kind enough to hire me.

The print currently resides in the Writing Center office where tutors can sit down, enjoy a cup of freshly brewed coffee from a large sculpture of a W and savor the bold taste of interdisciplinary collaboration.

Written by: Eli Udler

 


AI Recognition

Artificial Intelligence is an interesting field that has become more and more integral to our daily lives. Its applications range from facial recognition and recommendation systems to automated customer service. Such tasks make life a lot simpler, but are quite complex in and of themselves. They all rely on machine learning, which is how computers develop ways to recognize patterns. Recognizing these patterns, however, requires large amounts of data for the computer to be accurate. Luckily, in this day and age, there is a wide variety of data to train on and a wide variety of problems to tackle.

For this project, I wanted to show how AI can integrate with other innovative technologies in the lab. I went with object detection because its applications in engineering rely on microcontrollers, data analytics, and the Internet of Things. For example, a self-driving car needs to be able to tell where the road is or whether a person is up ahead. A task such as this simply wouldn’t function well without each component. Looking to the future, we will need more innovative ways to identify things, whether for a car or for security surveillance.

 

When I first started working in the Innovate Lab, I saw the LCD plate in one of the cabinet drawers and wanted to know how it worked. I was fascinated by its potential and decided to focus my project around the plate. The project itself was to have a Raspberry Pi recognize an object and output the label onto an LCD plate.

The first part, recognizing objects, wasn’t too hard to build. There are multiple models online that have already been trained for public use, and these pre-trained models give the computer everything it needs to recognize certain objects. The challenge came with installing all of the programs required to use the model and run the script. Afterwards, outputting the text onto the plate simply required me to wire the plate to the Pi. The plate is there to show the results so that you don’t have to plug in a monitor, but other alternatives could have been used as well. For example, the results could have been sent to a web application or stored in a file on the Pi.

So far the results haven’t always been accurate, but that just leaves room for improvement. I am hoping that soon I will be able to run the detection script on data that has been streamed to the Pi. Overall, I gained a better understanding of the applications of AI and engineering. This was only one of the many capabilities of AI, and there is still so much more to try.
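As a rough sketch of what such a pipeline can look like, the snippet below runs a pre-trained MobileNet-SSD model through OpenCV’s DNN module and writes the top label to the plate using Adafruit’s legacy CharLCD library. The model files, paths, and library choice are assumptions for illustration rather than the exact setup used in the lab.

```python
# Hypothetical sketch: detect an object with a pre-trained model and show the
# label on the LCD plate. File paths and library choice are assumptions.
import cv2
import Adafruit_CharLCD as LCD   # legacy Adafruit library for the 16x2 plate

# Class labels for the standard MobileNet-SSD model trained on PASCAL VOC.
LABELS = ["background", "aeroplane", "bicycle", "bird", "boat", "bottle",
          "bus", "car", "cat", "chair", "cow", "diningtable", "dog", "horse",
          "motorbike", "person", "pottedplant", "sheep", "sofa", "train",
          "tvmonitor"]

# Pre-trained model files, downloaded separately; the names are placeholders.
net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
                               "MobileNetSSD_deploy.caffemodel")
lcd = LCD.Adafruit_CharLCDPlate()

frame = cv2.imread("snapshot.jpg")          # or a frame from the Pi camera
blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
                             0.007843, (300, 300), 127.5)
net.setInput(blob)
detections = net.forward()                  # shape: (1, 1, N, 7)

# Pick the single most confident detection and print its label on the plate.
best_conf, best_label = 0.0, ""
for i in range(detections.shape[2]):
    conf = float(detections[0, 0, i, 2])
    if conf > best_conf:
        best_conf = conf
        best_label = LABELS[int(detections[0, 0, i, 1])]

if best_conf > 0.5:                          # simple confidence threshold
    lcd.clear()
    lcd.message(best_label)
```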

By: Robert McClardy


Applications of VR in Mind Body Research

Mind Body Health Group:

The UConn Mind Body Health Group will partner with the OPIM lab to use virtual reality and other related technology as potential interventions for various psychological and physical health disorders. A presentation, open to those interested in mind-body health research and applications of innovative technology, will be scheduled for this academic year.

Melissa Bray, PhD – Professor of School Psychology

Dissertation Research:

The OPIM lab is partnering with Johanna deLeyer-Tiarks, a School Psychology doctoral student under the advisement of Dr. Melissa Bray, on her dissertation research. This study will investigate the usefulness of Virtual Reality technology as a delivery model for Self-Modeling, a practice rooted in Bandura’s theory of observational learning. It is theorized that VR technology will facilitate an immersive Self-Modeling experience, which may promote stronger gains than traditional Self-Modeling approaches. A research project is being developed to investigate this theory by applying Virtual Reality Self-Modeling to individuals who stutter. The research will culminate in a VR intervention to treat clinical levels of chronic stuttering.

This project is in its early stages of development. IRB materials are not yet approved.

Johanna deLeyer-Tiarks – PhD Student, School Psychology


Designing the School of Business

I have been using CAD to make different models and designs since I was in high school. It’s so satisfying to make different parts in a program that you can then bring to life with 3D printing. The OPIM Innovation Space has several 3D printers with some really special capabilities, and I wanted to put my skills as a designer, and the abilities of the printers, to the test.

The MakerBot Z18 is easily one of the largest consumer-grade printers available. It can print within an 18 by 12 by 12 inch build volume; that works out to 2,592 cubic inches, or one and a half cubic feet! I challenged myself to build a model of UConn’s School of Business and then 3D print it at the largest size possible.

I started on Google Maps and traced out the School of Business onto a template. Then I walked outside the school and took pictures of its notable features. It took me several days to capture the details of the building, such as cutting out windows and creating the overhanging roof, in order to make the model accurate. I even hollowed out the model so that it could accommodate a microcontroller or LEDs if we wanted to use some IoT technology to upgrade the project.

Printing this behemoth of a project was a challenge. The entire design printed on its side so that it could use nearly all of the Z18’s build volume, and even at full efficiency it was estimated to take 114 hours to print. I contemplated cutting it into two pieces and printing them separately, but it would be so much cooler to use the full size of the printer. It took several tries before I was able to print the School of Business in one shot, but after several broken filament rolls and failed prints, the entire school was finished.

This project gave me great insight into the manufacturing problems that come with using 3D printing technology to produce exceedingly large parts. This model used about three pounds of filament and really pushed the limits of the technology. A miniature School of Business was not only a great showcase for the OPIM Department and for OPIM Innovate, but also a testament to the future of the technology. Maybe in the future, buildings will actually be 3D printed. It will be super exciting to see how this technology, and the CAD software that complements it, evolves moving forward.

By: Evan Gentile, Senior, MEM Major

 


Learning Facial Recognition

Nowadays, companies like Apple and Facebook make software that can unlock your phone or tag you in a photo automatically using facial recognition. Facial recognition first attracted me because of its incredible ability to translate a few lines of code into recognition that mimics the human eye. How can this technology be helpful in the world of business? My first thought was cyber security. Although I only had one semester, I wanted to be able to use video to recognize faces. My first attempt used a Raspberry Pi as the central controller. Raspberry Pis are portable, affordable, and familiar because they are used throughout the OPIM Innovation Space. There is also a camera attachment, which I thought made it perfect for my project. After realizing that the program I attempted to develop used too much memory, I moved to using Python, a programming language, on my own laptop.

Installing Python was a challenge in itself, and I was writing code in a language I had never used before. Eventually I was able to get basic scripts to execute on my computer, and then I could install and connect Python to the OpenCV library. OpenCV is a computer vision library that comes with stored facial recognition data, making it possible to quickly run face recognition programs in Python. The setup was really difficult; if a single file location was not exactly specified, or had moved by accident, then the entire OpenCV library wouldn’t be accessible from Python. It was these programs that taught me how to take a live video feed and greyscale it so the computer wasn’t overloaded with information. Then I identified the perimeters of nearby objects in the video, traced them with a rectangle, and used classifier files developed by Intel to recognize whether those objects were the face of a person.

The final product was really amazing. The Python script ran by using OpenCV to reference a large .xml file which held the data on how to identify a face. The program refreshed in real time, and as I looked into my live feed camera I watched the rectangular box follow the contour of my face. I was amazed by how much effort and such complex code it took to carry out a seemingly simple task. Based on my experience with the project, I doubt that Skynet or any other sentient robots are going to be smart enough to take over the world anytime soon. In the meantime, I can’t wait to see how computer vision technology will make the world a better place to live in.
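A minimal version of the kind of script described above might look like the sketch below, assuming OpenCV’s bundled Haar cascade file and the default webcam; the detection parameters are illustrative rather than the exact values I used.

```python
# Minimal sketch: greyscale the live feed, detect faces with OpenCV's
# pre-trained Haar cascade, and trace each face with a rectangle.
import cv2

# The cascade .xml ships with the opencv-python package.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)                    # open the default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # less data to process
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):    # press 'q' to quit
        break

cap.release()
cv2.destroyAllWindows()
```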

By: Evan Gentile, Senior MEM Major

 


Innovate Taking Steps Towards Splunk

After taking OPIM 3222, I learned about this really cool big data platform called Splunk. The big selling point of Splunk is that it can take just about any machine data and restructure it so that it is actually useful. So after I got a Fitbit Charge HR, my first thought was, “I wonder what we can find if we Splunk this.” I worked with Professor Ryan O’Connor on this project, and after several hiccups, we finally got a working add-on. Back when we first started this project, we found a “Fitbit Add-On” on Splunkbase that we thought we could just install and be ready to go. After spending a lot of time trying to get this add-on set up, we learned that it was a bit outdated and had some bugs that made it quite difficult to use. After a while, this project got pushed off to the side as we worked on other IoT-related projects in the OPIM Gladstein Lab.

By the time another spark of inspiration came along, the Splunk add-on was gone because of its age and bugs. Since the add-on was gone, we had to take matters into our own hands. Professor O’Connor and I split the work: I would use the Fitbit API to pull data, and he would work on getting it into Splunk. I wrote Python scripts to collect data on steps, sleep, and heart rate levels. We then found that the Fitbit API required OAuth2 re-authentication every few hours in order to continuously pull data. Professor O’Connor had already tackled a similar issue when making his Nest add-on for Splunk, using a Lambda function from Amazon Web Services to process the OAuth2 request. We decided to use the same approach for the Fitbit, with the major difference that the function is called every few hours. Professor O’Connor then made a simple interface so users can get their API tokens and set up the add-on in minutes.

We then took a look at all of the data we had and decided the best thing we could make was a step challenge. We invited our friends and family to track their steps with us, and we created a dashboard to summarize activity levels, keep an all-time leaderboard, and visualize steps taken over the course of the day. However, this app only scratches the surface of what can be done with Fitbit data. The possibilities are endless from both a business and a research perspective. We have already gotten a lot of support from the Splunk developer community, and we are excited to see what people can do with this add-on.
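For a sense of what those pull scripts look like, here is a simplified sketch of requesting daily step counts from the Fitbit Web API with a bearer token. The token handling is reduced to a placeholder; in the add-on, the AWS Lambda function refreshes it every few hours.

```python
# Simplified sketch of pulling daily step data from the Fitbit Web API.
# The access token is a placeholder here; in the add-on it is refreshed
# every few hours by an AWS Lambda function.
import requests

ACCESS_TOKEN = "YOUR_FITBIT_ACCESS_TOKEN"    # obtained through OAuth2
headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

# Daily step counts for the last week ("-" means the authorized user).
url = "https://api.fitbit.com/1/user/-/activities/steps/date/today/7d.json"
resp = requests.get(url, headers=headers)
resp.raise_for_status()

for day in resp.json()["activities-steps"]:
    print(day["dateTime"], day["value"], "steps")
```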

By: Tyler Lauretti, Senior MIS Major


Introducing Alexa to Splunk

Before the introduction of Apple’s “Siri” in 2011, Artificial Intelligence voice assistants were no more than science fiction. Fast forward to today, and you will find them everywhere, from your phone, where they help you navigate your contacts and calendar, to your home, where they help you around the house. Each smart assistant has its pros and cons, and everyone has their favorite. Over the last few years I have really enjoyed working with Amazon’s Alexa smart assistant. I began working with Alexa during my summer internship at Travelers in 2016. I attended a “build night” after work where we learned how to start developing with Amazon Web Services and the Alexa platform. Since then, I’ve developed six different skills and received Amazon hoodies, t-shirts, and Echo Dots for my work.


So I mentioned “skills”, but I didn’t really explain them. An Alexa “skill” is like an app on your phone. These skills use the natural language processing and other capabilities of Amazon’s “Lex” platform to do whatever you can think of. Some of the skills I have made in the past include a guide to West Hartford, a UConn fact generator, and a quiz to determine whether you should order pizza or wings.

However, while working in the OPIM Innovate lab I have found some other uses for Alexa that go beyond novelties. The first was using Alexa to query our Splunk instance. Lucky for us, the Splunk developer community has already done a lot of the legwork. The “Talk to Splunk with Amazon Alexa” add-on handles most of the networking between Alexa and your Splunk instance. In order for Alexa and Splunk to communicate securely, we had to set up a private key, a self-signed certificate, and a Java keystore. After a few basic configuration steps on the Splunk side, you can start creating your Splunk Alexa skill. This skill will be configured to talk to your Splunk instance, but it is up to you to determine what queries to run. You can create “Intents” that take an English phrase and convert it into a Splunk query that you write. However, you can also use this framework to make Alexa do things like play sounds or say anything you want. For example, I used the Splunk skill we created to do an “interview” with Alexa about Bank of America’s business strategy for my management class. Below you can find links to the Alexa Add-On for Splunk as well as a video of that “interview”.
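To give a feel for how an intent turns into a spoken answer, here is a generic sketch of an AWS Lambda handler for a custom Alexa skill. It is not the configuration format of the “Talk to Splunk” add-on itself; the intent name and canned response are made up for illustration.

```python
# Generic sketch of an AWS Lambda handler for a custom Alexa skill intent.
# The intent name and response text are hypothetical; in the Splunk add-on
# an intent maps to a Splunk query you write instead of a canned answer.

def handler(event, context):
    request = event["request"]
    if (request["type"] == "IntentRequest"
            and request["intent"]["name"] == "ErrorCountIntent"):
        # This is where a real skill would run its Splunk search.
        speech = "Splunk found twelve errors in the last hour."
    else:
        speech = "Try asking me about your Splunk data."

    # Standard Alexa custom skill response format.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```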

By: Tyler Lauretti, Senior MIS Major

 

Talk to Splunk with Amazon Alexa: https://splunkbase.splunk.com/app/3298/

Alexa Talks Strategy

https://www.youtube.com/watch?v=UYaV8ybYV04