When the pandemic hit, we were all affected in different ways. Students who remained on campus were forced into a more isolated existence. Preschoolers at the Child Development Lab were temporarily kept from learning together. And yet, green shoots have emerged from this period of isolation. The Grow2Give program is an example of that.

Preschool students benefit from exposure to plants and the growth cycle. Sustainability education for preschool children can also teach empathy through caring for plants. Studies have shown that caring for plants can improve mental health and well-being.

Inspired by “Nature Explore,” the sustainability curriculum offered by the Child Development Lab, Jon Moore, coordinator of OPIM Innovate, envisioned the Grow2Give program as a means to bring people together. Our “littlest Huskies” would grow plants to be given away as gifts to undergraduate students, and ideally, we could all learn how caring for plants can benefit our mental health.

Once supplies were dropped off, the teachers at the CDL got the process started by helping the children individually paint the 2” pots that would hold each plant. Although painting was a fun activity, the teachers focused on building empathy and on the idea that the children were making these pots as gifts for others.

Meanwhile, advice on what to grow came from students in the Horticulture Club. Since many students have cats in the dorm (or at home), and because of its fast-growing nature, we chose catgrass as one of the plants; we also picked lemongrass and a variety of mints for the first planting. Rosemary and lavender were added to the list as well, with the knowledge that these have a much longer growing period.

With pots painted and seed, soil, and grow stations set up, again led by their teachers, the kids were ready to plant. Under the leadership of head teachers Debbie Muro and Kelly Aston, most of the plants germinated and have been growing just in time for distribution. Although this spring was the first pilot for the logistics side of the project, we plan to continue building a network of stakeholders who want to see this project grow. In the past few months, Jon Moore has been meeting with other groups at the University to coordinate distribution and, hopefully, to enable the study and documentation of the mental health benefits of caring for plants. Colleen Atkinson and Karen McComb from Student Health and Wellness have been involved, along with Melissa Bray and Emily Winter from the Ed Psych department. Amy Crim from Residential Life Education has offered ideas on how we might distribute the plants that are currently being grown.

As spring arrives, our plants are coming soon to a dorm near you! If you receive a plant and want some tips on your specific plant's care, check out the embedded links below. If you would like to know more about the program or would like to be involved, please contact Jon Moore at the OPIM Innovate Lab at jonathan.a.moore@uconn.edu.

-Nick Brown, graduate assistant and MBA student ‘22


FarmBot: Robotics, Analytics, and Sustainability

Over the past year, we have been presented with a unique set of challenges. Living and working from home has been hard on us all, but it has been especially disruptive to research projects. However, this was the perfect opportunity to test a machine meant for exactly such a scenario. FarmBot gives us a sneak peek into the future of farming: fully automated and sustainable. These are imperative steps toward increasing the availability of fresh produce and cutting back on the plastic packaging, pesticide use, and carbon emissions that continue to pollute the earth and the food we eat. Even though setting up FarmBot proved an arduous task, the final result provides a sustainable farming method that allows for automated data collection and plant maintenance.

FarmBot is an autonomous, open-source computer numerical control (CNC) farming robot that prioritizes sustainability and optimizes modern farming techniques. Using CNC positioning, FarmBot can accurately and repeatedly conduct experiments with no human input and, therefore, very little error. We can write sequences and plan regimens and events to collect data 24/7, in addition to monitoring the system remotely. This allows us to plan as many plants, crops, inputs, and operations as needed. Reducing cost and increasing sustainable farming are priorities of FarmBot.

The FarmBot Genesis features a gantry mounted on tracks attached to the sides of a raised garden bed. The tracks allow a high level of precision because the garden bed is represented as a grid in which plant locations and tools have specific coordinates, allowing endless customization of your garden. The gantry bridges both sides of the track and uses a belt-and-pulley drive system and V-wheels to move along the X-axis (the tracks) and the Y-axis (the gantry). The cross-slide that moves along the Y-axis also uses a leadscrew to drive the Z-axis extrusion, allowing for up-and-down movement as well.
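To picture how that coordinate grid drives movement, here is a minimal Python sketch (purely illustrative: the function, plant coordinates, and Z depth are invented for this example, and FarmBot's real firmware works differently):

```python
# Simplified sketch of FarmBot-style coordinate moves (illustrative only;
# the names and numbers here are invented, not FarmBot's actual code).

def plan_moves(plants, home=(0, 0, 0)):
    """Return the sequence of (x, y, z) stops needed to visit each plant.

    The gantry travels along X (the tracks), the cross-slide along Y,
    and the Z-axis extrusion lowers the tool at each stop.
    """
    moves = []
    for x, y in plants:
        moves.append((x, y, 0))      # travel with the tool raised
        moves.append((x, y, -100))   # lower the Z-axis to the plant
        moves.append((x, y, 0))      # raise the tool before moving on
    moves.append(home)               # return home when done
    return moves

# Example: three plants laid out on the bed's grid (coordinates in mm).
stops = plan_moves([(200, 300), (200, 600), (800, 450)])
```

Because every plant and tool lives at a known coordinate, a "sequence" is ultimately just a list of stops like this, which is what makes the system so repeatable.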

Movement is powered by four NEMA 17 stepper motors with rotary encoders to monitor relative motion. The rotary encoders are monitored by a dedicated processor on the custom Farmduino electronics board, which pairs with a Raspberry Pi 3 as the “brain.” The Farmduino v1.5 board has several useful features, including built-in stall detection for the motors. Stainless steel and aluminum hardware makes the machine resistant to corrosion, keeping the system safe for long-term outdoor use.

The FarmBot Genesis model also includes several tools made of UV-resistant ABS plastic that are interchanged using the Universal Tool Mount (UTM) on the Z-axis extrusion. The UTM utilizes 12 electrical connections, three liquid/gas lines, and magnetic coupling to mount and change tools with ease, allowing for automation at nearly every step of the planting and growing process. Sowing seeds without any human intervention is possible with the vacuum-powered seed injector tool, which is compatible with three different-sized needles to accommodate seeds of varying sizes. Once planting has been completed, regimens can be set up to water plants on a schedule using the watering nozzle. The attachment is coupled with a solenoid valve to control the flow of water, ensuring that each plant receives as much moisture as it needs. The soil sensor tool takes the automation of watering a step further by detecting the moisture of the soil and using the collected data to modify the amount of water dispensed as needed. A customizable weeding tool uses spikes to push young weeds into the soil before they become an issue. FarmBot uses a built-in waterproof camera to detect weeds and take photos of plants to track growth. All of these tools and features create a completely customizable farming experience without the worry of human error.
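The soil sensor's role in adjusting the water dispensed can be sketched in a few lines of Python (the sensor range, threshold values, and dose here are invented for illustration, not FarmBot's actual calibration):

```python
# Illustrative sketch of moisture-adjusted watering. The sensor range
# (dry/wet) and full dose are invented example numbers.

def water_amount_ml(moisture, dry=200, wet=800, full_dose=250):
    """Scale the water dose down as the soil sensor reading rises.

    `moisture` is a raw analog reading: `dry` and `wet` bracket the
    sensor's range, and a fully dry reading gets the full dose.
    """
    moisture = max(dry, min(wet, moisture))    # clamp to the sensor range
    dryness = (wet - moisture) / (wet - dry)   # 1.0 = bone dry, 0.0 = soaked
    return round(full_dose * dryness)
```

The real system closes the same loop: read moisture, compute a dose, open the solenoid valve for a proportionate time.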

FarmBot operates using a 100% open-source operating system and web app. In the web app, we can easily control and configure the machine and create customizable sequences and routines, all with a drag-and-drop farm designer and a block-code format. From here we can see all information regarding the positioning, tools, and plants within the garden bed. FarmBot's Raspberry Pi runs FarmBot OS to communicate and stay synchronized with the web app, allowing it to download and execute scheduled events, be controlled in real time, and upload logs and sensor data.
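The scheduled events that a regimen produces can be pictured with a short sketch like this (a simplification with invented sequence names; FarmBot OS's real scheduler is more involved):

```python
# Minimal sketch of expanding a regimen into timestamped events
# (illustrative only; the entries and sequence names are invented).
from datetime import datetime, timedelta

def build_schedule(start, regimen):
    """Expand (day_offset, hour, sequence_name) entries into a sorted
    list of concrete (timestamp, sequence_name) events."""
    events = []
    for day, hour, sequence in regimen:
        when = start + timedelta(days=day, hours=hour)
        events.append((when, sequence))
    return sorted(events)

start = datetime(2021, 4, 1)
regimen = [(0, 8, "water"), (2, 8, "water"), (1, 12, "take photo")]
schedule = build_schedule(start, regimen)
```

FarmBot OS then just has to walk a list like this, executing each sequence when its timestamp arrives.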

We faced many ups and downs during the hardware and software phases of FarmBot. From vague reference docs to manufacturing defects, hardware failures, and network security problems, this was an in-depth and, at times, very frustrating project. However, we didn't want an easy project. The challenges we faced when building FarmBot, as annoying as they were to debug at the time, helped us gain a great understanding of how this machine works. Through all the blood, sweat, and tears (literally), we learned more from this project than we ever could have imagined. From woodworking to circuitry to programming and botany, we tackled a wide range of issues. But that's what made this project so worth it. Many times we had to resort to out-of-the-box thinking to resolve issues with the limited components we had. Working as a team also allowed us to bounce ideas off one another while each bringing our own unique talents. Nicolas has a background in computer science, which helped with programming FarmBot as well as resolving the software and networking issues that occurred. Hannah's major is Biological Sciences and she has experience with gardening, which helped with the plant science aspect of the project on top of overall planning and construction. As an electrical engineering major, Nicholas helped with the building process and wiring, and his input was critical in resolving issues in such an advanced electromechanical system. While we recognize FarmBot's shortcomings, it was an incredible learning experience and an imperative building block toward a more sustainable and eco-friendly future.

Moving forward, we plan on modifying FarmBot to include a webcam, a rain barrel, and solar panels. A webcam will give us a live feed of FarmBot for remote monitoring and will let us take photos for time-lapse photography. A rain barrel can collect rainwater to be recycled as FarmBot's water source, further increasing its sustainability. Solar panels will provide a dedicated, off-grid solar energy system, helping further reduce the cost and carbon emissions associated with running FarmBot. We also currently have an MIS capstone team developing ways to pull real-time data for future analysis and dataset usage in other academic settings.

Be on the lookout for future FarmBot updates, and feel free to reach out to opiminnovate@uconn.edu for more information.

By: Nicholas Satta, Hannah Meikle, Nicolas Michel


The Heart Rate Hat

Wearable biometric technology is currently revolutionizing healthcare and consumer electronics. Internet-enabled pacemakers, smart watches with heart rate sensors and all manner of medical equipment now merge health with convenience. This is all a step in the right direction, but not quite the end goal. In fact, it seems that we’re past the point where technology can fix all the problems we’ve created. If this is the case then, at the very least, I think we can have a bit of fun while we’re still alive. Let us not mince words: We’re headed straight for a global environmental collapse, and I propose we go out in style.

For this reason, I’ve designed the Heart Rate Hat. This headwear is simultaneously a fashion statement, a demonstration of the capabilities of biometric sensors and the first step to the creation of a fully functioning cyborg. The Heart Rate Hat is exactly what I need to tie together my vision for a cyberpunk dystopia. Combined with other gear from the lab, such as our EEG headset and our electromyography equipment, we could raise some very interesting questions about what kinds of biometric data we actually want our technology to collect. We could also explore the consequences of being able to quantify human emotion and the use of such data to make predictions. What once was the realm of science fiction, we are now turning into reality.

The design of the device is relatively simple. I used the FLORA (a wearable, Arduino-compatible microcontroller) as the brain of the device, a Pulse Sensor to read heart rate data from the user’s ear lobe, and several NeoPixel LED sequins for output. The speed at which the LEDs blink is determined by heart rate, and the color is determined by heart rate variance. In the process of designing this, I learned how to work with conductive thread, how to program addressable LEDs, and how to read and interpret heart rate and heart rate variance data from a sensor. As these are all important skills for wearable electronics prototyping, a similar project may be viable as an introduction to wearable product design. Going forward, I would like to explore other options for visualizing output from biometric sensors. I may work on outputting data from such a device to a computer via Bluetooth, incorporating additional sensors into the design, and logging the data for analysis.
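The mapping logic can be mimicked in plain Python (the hat itself runs Arduino code on the FLORA; the thresholds and colors below are invented examples, not the device's actual values):

```python
# Python mock-up of the hat's output logic. The real device runs Arduino
# code on the FLORA; these thresholds and colors are invented examples.

def blink_interval_ms(bpm):
    """Faster heartbeat -> faster blinking: one blink per beat."""
    return round(60_000 / bpm)

def variance_color(hrv_ms):
    """Map heart rate variance (ms) to an (R, G, B) NeoPixel color."""
    if hrv_ms < 20:
        return (255, 0, 0)    # low variability: red
    if hrv_ms < 50:
        return (255, 160, 0)  # moderate variability: amber
    return (0, 255, 0)        # high variability: green
```

On the FLORA, the same two functions would sit inside the Arduino loop, reading the Pulse Sensor and pushing the color to the NeoPixel sequins.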

Written By: Eli Udler

K-Cup Holder for the UConn Writing Center

One of the most amazing things about 3D printing is the speed at which an idea becomes a design. With the growing prevalence of this technology, the time between thinking up an object that you would like to exist and seeing it constructed continues to decrease. The thought of turning something I dreamed up into a reality was my primary inspiration for this project: a sculpture of the UConn Writing Center logo that doubles as a K-Cup holder.

I was excited to find out that a Writing Center tutor was kind enough to donate a Keurig to the office, putting life-giving caffeine in the hands of tutors without the cost of running down to Bookworms Cafe. Alas, it was disturbing to see that the K-Cups used by the machine were being stored in a small basket. Now, I’m not the Queen of England or anything, but I have my limits. The toll on my mental health taken by watching the cups lazily thrown into a pile in the woven container was enough to force me to take action. With less than half an hour of active work, I was able to turn the Writing Center logo, a stylized “W”, into a three-dimensional model complete with holes designed to hold K-Cups.

But there’s another reason I decided to turn my strange idea into a reality: I wanted to highlight the range of resources offered on campus to UConn students. The OPIM Innovate space and the Writing
Center aren’t so different, really. While the Writing Center can assist students with their writing in a variety of disciplines, Innovate provides a range of tech kits that teach students about emerging
technologies. Both are spaces outside the classroom where students can learn relevant skills, regardless of their majors. Most importantly, perhaps, both were kind enough to hire me.

The print currently resides in the Writing Center office where tutors can sit down, enjoy a cup of freshly brewed coffee from a large sculpture of a W and savor the bold taste of interdisciplinary collaboration.

Written by: Eli Udler


AI Recognition

Artificial intelligence is an interesting field that has become more and more integral to our daily lives. Its applications can be seen in facial recognition, recommendation systems, and automated client services. Such tasks make life a lot simpler but are quite complex in and of themselves. These tasks rely on machine learning, which is how computers develop ways to recognize patterns. Recognizing patterns, however, requires loads of data for computers to be accurate. Luckily, in this day and age, there is a variety of data to train from and a variety of problems to tackle.

For this project, I wanted to use AI to show how it can integrate with other innovative technologies in the lab. I went with object detection because its engineering applications rely on microcontrollers, data analytics, and the Internet of Things. For example, a self-driving car needs to be able to tell where the road is or if a person is up ahead. A task such as this simply wouldn’t function well without each component. Looking to the future, we will need more innovative ways to identify things, whether for a car or for security surveillance.


When I first started working in the Innovate Lab, I saw the LCD plate in one of the cabinet drawers and wanted to know how it worked. I was fascinated with its potential and decided to focus my project around the plate. The project itself was to have a Raspberry Pi recognize an object and output the label onto an LCD plate. The first aspect, recognizing objects, wasn’t too hard to create. Online, there are multiple models that are already trained for public use. These pre-trained models give the computer the algorithm to recognize a certain object. The challenge came with installing all the programs required to use the model and run the script. Afterwards, outputting the text onto the plate simply required wiring the plate to the Pi. The plate was needed to show the results; otherwise, a monitor would have to be plugged in, though other alternatives could have been used as well. For example, the results could have been sent to a web application or stored in a file on the Pi. So far, the results haven’t always been accurate, but that just leaves room for improvement. I am hoping that soon I will be able to run the detection script on data that has been streamed to the Pi. Overall, I gained a better understanding of the applications of AI and engineering. This was only one of the many capabilities of AI, and there is still so much more to try.
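To give a sense of the output step, here is a small Python sketch of formatting a detection result for a 16×2 character LCD plate (a hypothetical helper with invented example values; the actual lab script may differ):

```python
# Sketch of the output step: turning a detection result into the two
# 16-character lines a small character LCD plate can display.
# The label and confidence below are invented example values.

LCD_WIDTH = 16  # a common character-LCD width (16 columns x 2 rows)

def lcd_lines(label, confidence):
    """Return (top_line, bottom_line), each clipped/padded to 16 chars."""
    top = label[:LCD_WIDTH].ljust(LCD_WIDTH)
    bottom = f"{confidence:.0%} confident"[:LCD_WIDTH].ljust(LCD_WIDTH)
    return top, bottom

# Example: the model reported "keyboard" with 87% confidence.
top, bottom = lcd_lines("keyboard", 0.87)
```

On the Pi, these two strings would then be written to the wired-up plate instead of printed to a screen.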

By: Robert McClardy

Applications of VR in Mind Body Research

Mind Body Health Group:

The UConn Mind Body Health Group will partner with the OPIM lab to use virtual reality and other related technology as potential interventions for various psychological and physical health disorders.  A presentation will be scheduled for this academic year open to those interested in mind body health research and applications of innovative technology.

Melissa Bray, PhD – Professor of School Psychology

Dissertation Research:

The OPIM lab is partnering with Johanna deLeyer-Tiarks, a School Psychology doctoral student working under the advisement of Dr. Melissa Bray, on her dissertation research. This study will investigate the usefulness of virtual reality technology as a delivery model for Self-Modeling, a practice rooted in Bandura’s theory of observational learning. It is theorized that VR technology will facilitate an immersive Self-Modeling experience which may promote stronger gains than traditional Self-Modeling approaches. A research project is being developed to investigate this theory through the application of Virtual Reality Self-Modeling to individuals who stutter. The research will culminate in a VR intervention to treat clinical levels of chronic stuttering.

This project is in its early stages of development. IRB materials are not yet approved.

Johanna deLeyer-Tiarks – PhD Student, School Psychology

Designing the School of Business

I have been using CAD to make different models and designs since I was in high school.  It’s so satisfying to make different parts in a program that you can then bring to life with 3D Printing. In the OPIM Innovation Space, several 3D printers have some really special capabilities and I wanted to put my skills as a designer, and the abilities of the printer, to the test.

The MakerBot Z18 is easily one of the largest consumer-grade printers available. It can print within an 18 by 12 by 12 inch build volume. That’s one and a half cubic feet! I challenged myself to build a model of UConn’s School of Business and then 3D print it at the largest size possible.

I started on Google Maps and traced out the School of Business onto a template. Then I walked outside the school and took pictures of its notable features. It took several days for me to capture the details of the building, such as cutting out windows and creating the overhanging roof, in order to make the model accurate. I even hollowed out the model so that it could accommodate a microcontroller or LEDs if we wanted to use some IoT technology to upgrade the project.

Printing this behemoth of a project was a challenge. The entire design was printed on its side so that it could use nearly all of the Z18’s build volume, and even at full efficiency it was estimated to take 114 hours to print. I contemplated cutting it into two pieces and printing them separately, but it would be so much cooler to use the full size of the printer. It took several tries before I was able to print the School of Business in one shot. After several broken filament rolls and failed prints, the entire school was finished.

This project gave me great insight into the manufacturing problems faced when using 3D printing technology to produce exceedingly large parts. This model used about 3 pounds of filament and really pushed the limits of the technology. A miniature School of Business was not only a great showcase for the OPIM Department and for OPIM Innovate, but also a testament to the future of the technology. Maybe in the future, buildings will actually be 3D printed. It will be super exciting to see how this technology, and the CAD software that complements it, evolves moving forward.

By: Evan Gentile, Senior, MEM Major


Learning Facial Recognition

Nowadays, companies like Apple and Facebook make software that can unlock your phone or tag you in a photo automatically using facial recognition. Facial recognition first attracted me because of its incredible ability to translate a few lines of code into recognition that mimics the human eye. How can this technology be helpful in the world of business? My first thought was cybersecurity. Although I had merely one semester, I wanted to be able to use video to recognize faces. My first attempt used a Raspberry Pi as the central controller. Raspberry Pis are portable, affordable, and familiar because they are used throughout the OPIM Innovation Space. There is also a camera attachment, which I thought made it perfect for my project. After realizing that the program I attempted to develop used too much memory, I moved to using Python, a programming language, on my own laptop.

Installing Python was a challenge in itself, and I was writing code in a language I had never used before. Eventually I was able to get basic scripts to execute on my computer, and then I could install and connect Python to the OpenCV library. OpenCV is an open-source computer vision library that provides pre-trained facial recognition data for quickly running face recognition programs in Python. The setup was really difficult; if a single file location was not exactly specified, or had moved by accident, then the entire OpenCV library wouldn’t be accessible from Python. It was these programs that taught me how to take a live video feed and greyscale it so the computer wasn’t overloaded with information. Then I identified the outlines of nearby objects in the video, traced them with a rectangle, and used classifier files developed by Intel to recognize whether those objects were the face of a person. The final product was really amazing. The Python script ran by using OpenCV to reference a large .xml file which held the data on how to identify a face. The program refreshed in real time, and as I looked into my live feed camera, I watched the rectangular box follow the contour of my face. I was amazed by how much effort and how much complex code it took to carry out a seemingly simple task. Based on my experience with the project, I doubt that Skynet or any other sentient robots are going to be smart enough to take over the world anytime soon. In the meantime, I can’t wait to see how computer vision technology will make the world a better place to live in.
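The greyscaling step is worth unpacking: converting each frame to grayscale collapses three color channels into one, so there is far less data per frame to process. OpenCV’s conversion applies a standard weighted sum to every pixel, which can be written out by hand in plain Python (a teaching sketch of the math, not OpenCV’s implementation):

```python
# The "greyscale it" step by hand: a BGR -> grayscale conversion using
# the standard ITU-R BT.601 luma weights. This collapses three channels
# of data into one, so each frame is far cheaper to process.

def to_gray(b, g, r):
    """Weighted sum of one pixel's blue, green, and red channels."""
    return round(0.114 * b + 0.587 * g + 0.299 * r)

def grayscale(frame):
    """Convert a frame given as rows of (B, G, R) pixel tuples."""
    return [[to_gray(*px) for px in row] for row in frame]

# A tiny 2x2 "frame": white, red, black, and blue pixels (BGR order).
frame = [[(255, 255, 255), (0, 0, 255)],
         [(0, 0, 0), (255, 0, 0)]]
gray = grayscale(frame)
```

With OpenCV installed, a single call to `cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)` performs this same conversion on a whole camera frame at once.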

By: Evan Gentile, Senior MEM Major


Innovate Taking Steps Towards Splunk

After taking OPIM 3222, I learned about this really cool big data platform called Splunk. I learned that the big selling point of Splunk is that it can take any machine data and restructure it so that it is actually useful. So after I got a Fitbit Charge HR, my first thought was, “I wonder what we can find if we Splunk this.” I worked with Professor Ryan O’Connor on this project, and after several hiccups, we finally got a working add-on. Back when we first started this project, we found a “Fitbit Add-on” on Splunkbase that we just had to install, and then we would be ready to go. After spending a lot of time trying to get this add-on set up, we learned that it was a bit outdated and had some bugs that made it quite difficult to use. After a while, this project got pushed off to the side as we worked on other IoT-related projects in the OPIM Gladstein Lab.

By the time another spark of inspiration came along, the Splunk add-on was gone because of its age and bugs. Since the add-on was gone, we had to take matters into our own hands. Professor O’Connor and I split the work: I would use the Fitbit API to pull data, and he would work on putting it into Splunk. I wrote Python scripts to collect data on steps, sleep, and heart rate levels. We then found that the Fitbit API requires OAuth2 re-authentication every few hours to be able to continuously pull data. Professor O’Connor had already tackled a similar issue when making his Nest add-on for Splunk. He used a Lambda function from Amazon Web Services to process the OAuth2 request. We decided to use the same approach for the Fitbit, the major difference being that the function is called every few hours. Professor O’Connor then made a simple interface for users to get their API tokens and set up the add-on in minutes. We then took a look at all of the data we had and decided the best thing we could make was a step challenge. We invited our friends and family to track their steps with us and created a dashboard to summarize activity levels, maintain an all-time leaderboard, and visualize steps taken over the course of the day. However, this app only scratches the surface of what can be done with Fitbit data. The possibilities are endless from both a business and a research perspective. We have already gotten a lot of support from the Splunk developer community, and we are excited to see what people can do with this add-on.
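The “every few hours” refresh logic boils down to checking how close the OAuth2 access token is to expiring. A simplified sketch (illustrative only; the real Lambda function’s code and safety margin differ):

```python
# Simplified sketch of the refresh decision an OAuth2 client makes.
# The margin and timestamps are invented example values.
import time

def needs_refresh(issued_at, expires_in, margin=300, now=None):
    """True if the access token is within `margin` seconds of expiry.

    `issued_at` and `now` are Unix timestamps; `expires_in` is the
    token lifetime in seconds reported when the token was granted.
    """
    now = time.time() if now is None else now
    return now >= issued_at + expires_in - margin
```

When this returns True, the client exchanges its refresh token for a new access token before the next data pull, which is exactly the chore the scheduled Lambda function automates.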

By: Tyler Lauretti, Senior MIS Major

Introducing Alexa to Splunk

Before the introduction of Apple’s “Siri” in 2011, artificial intelligence voice assistants were no more than science fiction. Fast forward to today, and you will find them everywhere: in your phone, helping you navigate your contacts and calendar, and in your home, helping you around the house. Each smart assistant has its pros and cons, and everyone has their favorite. Over the last few years, I have really enjoyed working with Amazon’s Alexa smart assistant. I began working with Alexa during my summer internship at Travelers in 2016. I attended a “build night” after work where we learned how to start developing with Amazon Web Services and the Alexa platform. Since then, I’ve developed six different skills and received Amazon hoodies, t-shirts, and Echo Dots for my work.

So I mentioned “skills,” but I didn’t really explain them. An Alexa “skill” is like an app on your phone. These skills use natural language processing (the same technology behind Amazon’s “Lex” platform) to do whatever you can think of. Some of the skills I have made in the past include a guide to West Hartford, a UConn fact generator, and a quiz to determine whether you should order pizza or wings. However, while working in the OPIM Innovate lab, I have found some other uses for Alexa that go beyond novelties. The first was using Alexa to query our Splunk instance. Lucky for us, the Splunk developer community has already done a lot of the legwork. The “Talk to Splunk with Amazon Alexa” add-on handles most of the networking between Alexa and your Splunk instance. In order for Alexa and Splunk to communicate securely, we had to set up a private key, a self-signed certificate, and a Java keystore. After a few basic configuration steps on the Splunk side, you can start creating your Splunk Alexa skill. This skill will be configured to talk to your Splunk instance, but it is up to you to determine what queries to run. You can create “intents” that take an English phrase and convert it to a Splunk query that you write. However, you can also use this framework to make Alexa do things like play sounds or say anything you want. For example, I used the Splunk skill we created to do an “interview” with Alexa about Bank of America’s business strategy for my management class. Below you can find links to the Alexa add-on for Splunk as well as a video of that “interview.”
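At its core, an intent is just a mapping from a spoken phrase to a search string you have written. A toy illustration in Python (the intent names and SPL queries here are invented; the actual add-on uses its own mapping configuration):

```python
# Toy illustration of mapping spoken intents to Splunk searches.
# The phrases and SPL strings below are invented examples, not the
# "Talk to Splunk with Amazon Alexa" add-on's real configuration.

INTENTS = {
    "top errors": "search index=main level=ERROR | top limit=5 message",
    "login count": "search index=auth action=login | stats count",
}

def resolve(phrase):
    """Return the SPL for a recognized phrase, or None if unmapped."""
    return INTENTS.get(phrase.lower().strip())
```

The add-on’s job is then to run the resolved search against your Splunk instance and hand the result back to Alexa to speak aloud.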

By: Tyler Lauretti, Senior MIS Major


Talk to Splunk with Amazon Alexa: https://splunkbase.splunk.com/app/3298/

Alexa Talks Strategy