Hi! I'm Aime Munezero

I hold a Master's degree in Information Technology from Carnegie Mellon University and am currently pursuing a second Master's degree in Business Analytics at Emory University's Goizueta Business School. I am also working with Realtor.com as a Machine Learning Consultant on a predictive autocomplete project.

I enjoy meeting new people, and I love building solutions that address business challenges and create impact, especially ones that make life more enjoyable in big or small ways. I do this by leveraging my data science and data engineering skills.

In my free time, I enjoy playing and watching soccer, especially while rooting for my favorite team. I also like playing video games, engaging in outdoor activities, hanging out with friends and family, and listening to music.

Work Experience


Realtor.com

Machine Learning Consultant

During my time at Emory University, I had the opportunity to work on industry projects addressing real business challenges. I was privileged to join a team collaborating with Realtor.com to enhance their autocomplete feature, personalizing it for users on the platform to increase engagement. The project is still underway; our technical stack includes Python, SQL, and AWS cloud services, which we are using to improve the user experience on Realtor.com.
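The core idea of personalized autocomplete can be sketched in a few lines: rank candidate completions by how relevant they are to this particular user. Here is a minimal, hypothetical illustration that ranks by the user's past query frequency; the data and ranking rule are invented for illustration, not the production approach at Realtor.com.

```python
# Toy sketch of personalized autocomplete: rank candidate completions by how
# often this user searched them before. All data here is invented; a production
# system would use far richer signals.
from collections import Counter

# Hypothetical per-user search history with counts.
user_history = Counter({
    "atlanta ga": 5,
    "atlanta homes for sale": 3,
    "athens ga": 1,
})

def suggest(prefix: str, history: Counter, k: int = 2) -> list[str]:
    """Return up to k completions matching the prefix, most frequent first."""
    matches = [q for q in history if q.startswith(prefix)]
    return sorted(matches, key=lambda q: -history[q])[:k]

print(suggest("atl", user_history))  # most frequent matching queries first
```

In practice the ranking signal would blend popularity, location, and per-user behavior rather than raw counts, but the prefix-filter-then-rank shape stays the same.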


Awesomity Lab

Machine Learning Lead

I have always been intrigued by the potential of cutting-edge AI technologies. After earning my Master's degree from Carnegie Mellon University, I had the opportunity to join Awesomity Lab as a Machine Learning Lead, a company renowned for its exceptional software services in Rwanda.
During my time there, I was fortunate enough to lead a project focused on implementing a computer vision algorithm. Our goal was to predict the weight of lambs using 3D imagery. Impressively, we achieved a prediction error rate of just 5%, which significantly reduced the operational costs for our client's livestock management.
The project served as a pivotal prototype, showcasing how AI can revolutionize the farming industry. It was not just about the technological achievement but also about demonstrating AI's capacity to enhance efficiency and sustainability in agriculture, setting a precedent for future innovations.
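The shape of that prediction problem can be illustrated with a toy sketch: regress weight on geometric features extracted from imagery and score the model with a percentage-error metric, the kind of number behind a "5% error rate". The features, data, and model below are synthetic placeholders, not the actual pipeline.

```python
# Toy illustration: regress animal weight on features derived from 3D imagery.
# Features and data are synthetic; the real project's pipeline differed.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)

# Hypothetical features: estimated body volume (m^3) and body length (m).
volume = rng.uniform(0.02, 0.06, size=200)
length = rng.uniform(0.6, 1.1, size=200)
X = np.column_stack([volume, length])

# Synthetic ground-truth weight (kg), roughly proportional to volume.
weight = 1000 * volume + 5 * length + rng.normal(0, 1.0, size=200)

model = LinearRegression().fit(X, weight)
pred = model.predict(X)

# Mean absolute percentage error, the kind of metric behind a "5% error rate".
mape = np.mean(np.abs(pred - weight) / weight)
print(f"MAPE: {mape:.1%}")
```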

JP Morgan

AI Researcher

During my time at Carnegie Mellon University, beginning in September 2022, I had the opportunity to work on a project involving low-resource languages spoken in Central Africa, an important frontier for Conversational AI. At the time, it was difficult to find Large Language Models capable of handling these lesser-known languages; existing solutions, including those from leading companies like Google, performed poorly, highlighting the need for extensive research to advance the field and serve the community.
My CMU teammates and I, together with a group of researchers from JP Morgan, focused on the preliminary task of Natural Language Understanding (NLU). NLU is a cornerstone of Conversational AI: given a user's query, the model predicts the query's intent and fills its corresponding slots.
In our study, we compiled a dataset for the community that includes human translations for three languages: Swahili, Kinyarwanda, and Luganda. We conducted baseline experiments using pre-trained language models, including mBERT and XLM-RoBERTa, which showed promising results. We shared our findings and methodology in a paper presented at a conference, aiming to inspire further research within the community.
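To make the task concrete: NLU pairs each utterance with an intent label and extracts slot values from it. Here is a deliberately tiny rule-based illustration of that input/output contract; the rules, labels, and examples are invented, and the actual baselines were fine-tuned mBERT and XLM-RoBERTa models, not keyword matching.

```python
# Toy illustration of the NLU task: predict an intent and slot values for a
# query. Rules and labels are invented; the real baselines fine-tuned mBERT
# and XLM-RoBERTa on the translated dataset.

INTENT_KEYWORDS = {
    "weather_query": {"weather", "rain", "sunny"},
    "alarm_set": {"alarm", "wake"},
}

CITY_SLOTS = {"kigali", "kampala", "nairobi"}

def parse(utterance: str) -> dict:
    """Return the predicted intent and any recognized slot values."""
    tokens = utterance.lower().split()
    intent = next(
        (name for name, kws in INTENT_KEYWORDS.items() if kws & set(tokens)),
        "unknown",
    )
    slots = {"city": t for t in tokens if t in CITY_SLOTS}
    return {"intent": intent, "slots": slots}

print(parse("What is the weather in Kigali"))
# → {'intent': 'weather_query', 'slots': {'city': 'kigali'}}
```

A trained model replaces the keyword lookups with learned classifiers (one over intents, one tagging each token with a slot label), but the interface is the same.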

Carnegie Mellon University

Research Associate

After my graduation from CMU, I continued working with Professor Barry Rawn on a project for the Rwanda Energy Group, primarily focusing on analyzing data silos within the company. The project was exciting because it allowed me to learn how to create compelling visualizations, using JavaScript libraries such as D3.js as well as Python, to clearly communicate complex data insights.


REG (Rwanda Energy Group)

Data Engineer

In the summer of 2022, I had the opportunity to work as a Data Engineer at REG (Rwanda Energy Group), where I contributed to an ambitious project aimed at digitizing Rwanda's energy sector. The project's goal was to transition from traditional, manual operations to a modern, digital framework to enhance efficiency and reliability. The shift was critical for REG as a way to streamline its operations, reduce losses, and improve revenues through better energy distribution.
As an intern, my role involved collaborating across various departments to conceptualize a unified data acquisition system. This system was designed to centralize data control, ensuring a comprehensive overview of the energy flow and significantly reducing energy losses by allowing for real-time monitoring and management.
To achieve this, my colleague Deji Adebayo and I designed Entity Relationship Diagrams (ERDs) for the new data acquisition system. These ERDs were pivotal in outlining how data would be structured and interrelated across the system, enabling us to track and minimize energy losses. Our goal was a notable reduction in energy losses across six functional departments, improving both efficiency and the sustainability of Rwanda's energy sector.


Truist Bank Modelling and Business Challenge

I participated in a modelling competition organized by Truist Bank, where we focused on implementing an explainable classification algorithm. By leveraging decision trees and boosting methods, we aimed to accurately predict whether clients would enroll for a Certificate of Deposit. This approach achieved an Area Under the Curve (AUC) score of 0.8, securing second place in the competition for our predictive accuracy and innovative use of explainable AI.
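A hedged sketch of that style of model: a boosted-tree classifier evaluated by AUC, with feature importances providing the explainability. The data below is synthetic from scikit-learn, not Truist's, and the exact model and features used in the competition differed.

```python
# Hedged sketch of the approach: an explainable boosted-tree classifier
# evaluated by AUC. Data here is synthetic, not Truist's.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for "will this client enroll for a CD?" data.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# AUC scores the ranking of predicted enrollment probabilities.
probs = clf.predict_proba(X_test)[:, 1]
auc = roc_auc_score(y_test, probs)
print(f"AUC: {auc:.2f}")

# Feature importances are one simple lever for the "explainable" requirement.
print(clf.feature_importances_.round(2))
```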
Additionally, I participated in another business challenge where our objective was to develop Natural Language Processing (NLP) solutions to enhance trust in technological services. We analyzed customer feedback and social media reviews to surface security-related insights. This gave us a deeper understanding of customer concerns about bank security and enabled us to propose strategies to address them effectively. Our project stood out for its creative application of NLP to both customer trust and bank security, again securing second place in the challenge for our forward-thinking and impactful solution.
Thanks to the hard work of my awesome team, we were able to build creative and innovative solutions.

Data Engineering Project

Strategic Casting Analytics

As a machine learning engineer, I not only build models but also orchestrate data pipelines, ensuring that the data feeding into these models is clean, well-structured, and accessible.
My colleagues and I have worked on a data engineering project aimed at optimizing the casting process in the film industry using data-driven insights. We sought to understand the dynamics of actor-director collaborations and how these relationships influence the success of film projects. The project aimed to identify the best actors for movie roles by analyzing their historical collaborations with directors and evaluating the strength of these connections. By leveraging big data, the question extended to exploring how the depth and quality of these industry relationships could be systematically visualized and quantified to inform casting decisions.
For the project, we used the IMDb dataset and built an ETL pipeline with Sqoop, Hive, and Python to power a network algorithm that maps actor-director connections and evaluates their impact on film success. Our approach improves casting accuracy, fosters industry collaborations, and optimizes team composition, all of which benefit the film industry.
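The connection-strength idea at the heart of the network algorithm can be sketched simply: count how often each actor-director pair worked together and rank the pairs. The records below are invented placeholders; the real project derived these edges from the IMDb dataset via the Sqoop/Hive pipeline and weighted them with richer success signals.

```python
# Toy sketch of collaboration strength: count how often each actor-director
# pair worked together and rank pairs. Records are invented; the real project
# drew on the IMDb dataset and additional success metrics.
from collections import Counter

# Hypothetical (actor, director) credits, one entry per shared film.
credits = [
    ("Actor A", "Director X"),
    ("Actor A", "Director X"),
    ("Actor A", "Director Y"),
    ("Actor B", "Director X"),
]

strength = Counter(credits)  # collaboration count as connection strength

# Best-connected pair first.
ranked = strength.most_common()
print(ranked)
```

In graph terms, each count is an edge weight in a bipartite actor-director network; centrality and weighted-edge measures over that graph are what turn raw counts into casting recommendations.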

Cutting-Edge Projects


During my free time, I love to play soccer and basketball, and watching sports matches is one of my favorite pastimes. In the week leading up to the Super Bowl, as everyone was eager for information about the players, their stats, and their odds in the game, I was inspired to fine-tune an open-source language model on current data from the NFL website covering the stats of every league player in the season.

I explored Hugging Face, a leading platform for open-source models, decided on Meta's Llama-2-7b-chat model, and used the official 2023 National Football League Record Fact Book as my data source. I then developed a simple demo app that lets users ask about any player's stats and information through prompts. The project not only fulfilled my interests but also yielded impressive results and received excellent reviews.
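One piece of that workflow can be sketched here: turning structured player records into the prompt/completion JSONL format commonly used for instruction fine-tuning. The field names and records below are hypothetical stand-ins, not the Fact Book's actual schema, and the real project's preprocessing differed.

```python
# Hedged sketch: convert player stat records into prompt/completion JSONL,
# a common instruction fine-tuning format. Records are hypothetical; the
# real data source was the 2023 NFL Record Fact Book.
import json

players = [
    {"name": "Player One", "position": "QB", "passing_yards": 4500},
    {"name": "Player Two", "position": "WR", "receiving_yards": 1300},
]

def to_example(player: dict) -> dict:
    """Build one fine-tuning example from a player record."""
    stats = ", ".join(f"{k.replace('_', ' ')}: {v}"
                      for k, v in player.items() if k != "name")
    return {
        "prompt": f"Tell me about {player['name']}.",
        "completion": f"{player['name']} - {stats}.",
    }

# One JSON object per line: the usual JSONL fine-tuning layout.
lines = [json.dumps(to_example(p)) for p in players]
print(lines[0])
```

Each line of the resulting file is one training example, which fine-tuning frameworks on Hugging Face can consume directly after tokenization.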