Projects

Building a Novel NLP-Based Recommendation Engine for Haikus

Code woven in light,
Silent dance of electrons,
Universe awake

Recommendation algorithms are widely used in e-commerce and on social media platforms, where they draw on user information to curate personalized suggestions. To learn more about recommendation systems, I built my own model. The initial challenge was finding an appropriate dataset; to streamline the process, I opted for short-form text, which lends itself well to recommendations and minimizes the computational load associated with longer, more complex inputs.

After several days of exploration, I encountered the dual challenge of finding a dataset that mirrored real-world scenarios (with missing data and duplicates) and had a substantial volume of entries (thousands of data points). Unfortunately, the search yielded no dataset with meaningful metadata available at a low cost. Consequently, I decided to generate the ground truth for the recommendation model using the OpenAI API. Specifically, I used the query below as input, with each aspect drawn randomly from a theme-based distribution. This allowed me to retain all metadata for the predictive models used later.

Create a haiku using no commas. Don't output anything except the haiku. Its theme should be: {SelectedTheme}. It should have word complexity of {WordComplexity}/5, emotional intensity of {EmotionalIntensity}/5, metaphorical depth of {MetaphoricalDepth}/5, archaic language usage of {ArchaicLanguage}/5, imagery of {Imagery}/5.

Project Image 1
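Below is a minimal sketch of how this generation step might look with the OpenAI Python client; the model name, theme list, and uniform attribute sampling are illustrative assumptions, since the actual theme-conditioned distributions aren't reproduced here.

```python
import random
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

THEMES = ["nature", "technology", "love", "loss"]  # hypothetical theme list
PROMPT = (
    "Create a haiku using no commas. Don't output anything except the haiku. "
    "Its theme should be: {theme}. It should have word complexity of {wc}/5, "
    "emotional intensity of {ei}/5, metaphorical depth of {md}/5, "
    "archaic language usage of {al}/5, imagery of {im}/5."
)

def sample_attributes(theme: str) -> dict:
    # Placeholder for the theme-conditioned distributions described above;
    # here every attribute is drawn uniformly from 1-5 for simplicity.
    return {k: random.randint(1, 5) for k in ("wc", "ei", "md", "al", "im")}

def generate_haiku(theme: str) -> tuple[str, dict]:
    attrs = sample_attributes(theme)
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model choice
        messages=[{"role": "user", "content": PROMPT.format(theme=theme, **attrs)}],
    )
    # Keep the attributes alongside the text so the metadata survives as ground truth.
    return response.choices[0].message.content.strip(), attrs

haiku, metadata = generate_haiku(random.choice(THEMES))
```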

In theory, if the generative step were visible to users, input validation with a lightweight model like Ada or Davinci could precede the use of a more resource-intensive model. This approach would ensure that inputs align with the task before the costlier model is invoked. Checking the output for correctness or divergence could then be done without ML, ensuring the result matches what the end user expects.
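As a rough illustration of the non-ML output check described above, a few structural rules (three lines, no commas, roughly 5-7-5 syllables) catch most divergent generations. The syllable heuristic below is an assumption for the sketch, not the method actually used.

```python
import re

VOWEL_GROUPS = re.compile(r"[aeiouy]+", re.IGNORECASE)

def rough_syllables(line: str) -> int:
    # Crude heuristic: count vowel groups per word; good enough for a sanity check.
    return sum(max(1, len(VOWEL_GROUPS.findall(word))) for word in line.split())

def looks_like_haiku(text: str) -> bool:
    lines = [l.strip() for l in text.strip().splitlines() if l.strip()]
    if len(lines) != 3 or "," in text:
        return False
    # Allow some slack around 5-7-5 since the syllable counter is approximate.
    return all(abs(rough_syllables(l) - target) <= 2
               for l, target in zip(lines, (5, 7, 5)))
```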

The generated ground truth comprised haiku organized by theme, accompanied by features such as Word Complexity, Emotional Intensity, Metaphorical Depth, Archaic Language, and Imagery, each initially rated on a scale of 1-5. I explored various Natural Language Processing (NLP) models to extract these features from the haiku. 12,000 data points were used for training, 2,000 for testing, and an additional 1,000 for exercising AWS performance monitoring. These final 1,000 data points were drawn from a uniform distribution to deliberately introduce model drift and observe how monitoring responded.

Initial attempts at training distilbert-base proved suboptimal at predicting the specified feature values. At times, accuracy fell below a uniform random guess, likely due to the distributions used to generate the data. While other models like BART, RoBERTa, and ALBERT exhibited marginally better performance, none significantly outperformed the others. Even when manually assessing each aspect, results were not reassuring; the distinction between a rating of 2 and 3, or 3 and 4, didn't appear to carry much meaning, whether judged by human or machine. The choice to initially confine the values to the range of 1 to 5 was arbitrary.

To make the labels more intuitive, I reduced their number from 5 to 3. Following this change, model performance improved significantly. Part of this increase is expected, since fewer options also raise the accuracy of a random guess. Additionally, significant time was spent tuning hyperparameters, particularly the learning rate, number of epochs, batch size, and activation functions. Aggregate accuracy was as low as 23% across all 5 features for the 5-label DistilBERT models and as high as 62% across all 5 features for the 3-label RoBERTa models. An untrained bart-large-mnli was able to correctly assess the theme of a haiku 81% of the time, improving only marginally after fine-tuning.
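A condensed sketch of the 3-label fine-tuning setup using the Hugging Face Trainer API; the file names, the focus on a single feature, and the hyperparameter values are illustrative assumptions rather than the exact configuration that produced the numbers above.

```python
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)

# Assumed CSV files with a "text" column (the haiku) and a "label" column in {0, 1, 2},
# i.e. the 1-5 ratings for one feature (e.g. Imagery) collapsed into low/medium/high.
dataset = load_dataset("csv", data_files={"train": "haiku_train.csv",
                                          "test": "haiku_test.csv"})

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=3)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=64)

dataset = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="roberta-imagery",
    learning_rate=2e-5,               # illustrative values; these were tuned in practice
    num_train_epochs=4,
    per_device_train_batch_size=32,
)

trainer = Trainer(model=model, args=args,
                  train_dataset=dataset["train"], eval_dataset=dataset["test"])
trainer.train()
```

In practice one model per feature (or a multi-head variant) would be trained; the sketch shows the shape of a single run.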

Project Image 2

When developing the recommendation models, I employed two content-based approaches. The first used Word2Vec without incorporating metadata, while the second relied solely on metadata, excluding content. In the latter scenario, recommendations were based on Euclidean distance over the metadata to determine similarity. Additionally, the theme was incorporated through a parameter ranging from 1 to 3 that brings items with similar subject matter closer together.
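A small sketch of the metadata-only recommender: Euclidean distance over the five feature ratings, with an assumed theme parameter between 1 and 3 that pulls same-theme haiku closer. The specific weighting scheme (dividing same-theme distances) is illustrative.

```python
import numpy as np

def recommend(query_idx: int, features: np.ndarray, themes: list[str],
              theme_weight: float = 2.0, k: int = 5) -> np.ndarray:
    """Return indices of the k nearest haiku by metadata similarity.

    features: (n, 5) array of Word Complexity, Emotional Intensity,
              Metaphorical Depth, Archaic Language, and Imagery ratings.
    theme_weight: 1-3; larger values favor haiku sharing the query's theme.
    """
    dist = np.linalg.norm(features - features[query_idx], axis=1)
    same_theme = np.array([t == themes[query_idx] for t in themes])
    # Shrink distances for same-theme items so they rank closer (assumed scheme).
    dist = np.where(same_theme, dist / theme_weight, dist)
    dist[query_idx] = np.inf          # never recommend the query itself
    return np.argsort(dist)[:k]
```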

Project Image 3

The deployment of these models on AWS was executed through an EC2 instance, leveraging Amazon SageMaker to integrate the fine-tuned RoBERTa and recommendation models as endpoints. In the data ingestion process facilitated by Apache Airflow, the RoBERTa model dynamically generates features when they are not provided, utilizing conditional DAGs. This ensures that, when invoked, the recommendation models always operate on complete data. SageMaker endpoints are configured to establish secure connections, complemented by the implementation of Model Monitor for continuous performance monitoring.
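A simplified sketch of the conditional ingestion step, assuming a recent Airflow 2.x installation and hypothetical endpoint and task names: a branch checks whether the five ratings are present, calls the RoBERTa endpoint only when they aren't, and proceeds to the recommendation step either way.

```python
import json
from datetime import datetime

import boto3
from airflow import DAG
from airflow.operators.python import BranchPythonOperator, PythonOperator

runtime = boto3.client("sagemaker-runtime")

FEATURES = {"word_complexity", "emotional_intensity", "metaphorical_depth",
            "archaic_language", "imagery"}

def check_features(**context):
    # Branch: skip feature generation if the incoming record already has all five ratings.
    record = context["dag_run"].conf.get("record", {})
    return "recommend" if FEATURES <= record.keys() else "generate_features"

def generate_features(**context):
    record = context["dag_run"].conf["record"]
    resp = runtime.invoke_endpoint(
        EndpointName="roberta-feature-extractor",   # assumed endpoint name
        ContentType="application/json",
        Body=json.dumps({"text": record["haiku"]}),
    )
    return json.loads(resp["Body"].read())

def recommend(**context):
    # Call the recommendation endpoint here with the (now complete) metadata.
    ...

with DAG("haiku_ingestion", start_date=datetime(2024, 1, 1),
         schedule=None, catchup=False) as dag:
    branch = BranchPythonOperator(task_id="check_features", python_callable=check_features)
    gen = PythonOperator(task_id="generate_features", python_callable=generate_features)
    rec = PythonOperator(task_id="recommend", python_callable=recommend,
                         trigger_rule="none_failed_min_one_success")
    branch >> [gen, rec]
    gen >> rec
```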


Making Microsoft's employee survey approach more actionable

Microsoft has a strong methodological process involving internal surveys used to test products and methods. My role was to support the Hybrid workstream, which focused on improving employees' experience in a new working environment. The results of this workstream frequently fed into publicly available output, as shown on the right, as well as internal reporting to the units that manage portions of the experience. Our group was gathering great data on an important topic but constantly came up against the same problem: utilizing it effectively. With 2,000+ responses each month, the amount and variety of data meant that just processing open-ended responses and updating newsletters took up most of my time. If we were going to utilize this wealth of data effectively, we needed a new approach.

To address the challenge of efficiently utilizing the wealth of data collected through our internal surveys, I proposed a significant overhaul of the survey methodology. One key change was making the survey modular, breaking it into smaller, bite-sized sections, each focused on a specific topic. This modular approach allowed us to serve different respondents different questions, gaining the same depth of information while reducing the number of items each respondent sees from dozens to fewer than a dozen and saving everyone time. By doing so, we not only gained wider coverage across various aspects of the employee experience but also increased response rates, as respondents were more likely to complete the shortened survey.

Another important modification was the shift from open-ended questions to closed-ended ones, designed to be more easily actionable. While open-ended questions provide valuable insights, they can be time-consuming to analyze and can result in a loss of structured data. By providing respondents with a set of predefined options in closed-ended questions, we ensured that we captured specific, quantifiable responses. However, we also recognized the importance of not losing valuable information that may be emergent. To address this concern, we included an "Other" option in each closed-ended question, allowing respondents to provide additional context or input when necessary. This approach struck a balance between structured data collection and the flexibility to capture unexpected insights. With these changes, I not only streamlined the survey process but also improved our ability to extract actionable insights from the data.

A/B Testing Chase's acquisition pipelines to increase monthly revenue by $20M

During my tenure at JP Morgan Chase, I supported a series of initiatives aimed at enhancing the digital user experience and boosting digital account acquisition through owned advertisements and links. I was responsible for supporting acquisition research across both public and private digital portals, with a focus on data-driven decision-making. Through rigorous A/B testing, we fine-tuned our strategies to ensure not only that more people were arriving at acquisition funnels but also that those who arrived intended to open an account. This was atypical: most acquisition analysis did not account for the full journey. Teams considered either conversion from owned media to the application or conversion from the start of the application to the end, which caused issues such as many people clicking vague links to explore without converting, or products being offered to customers who, based on their financial history, were unlikely to qualify.
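The testing tooling itself isn't described here, but as a generic illustration, comparing full-journey conversion (owned-media click through to completed application) between two variants can be done with a two-proportion z-test; the counts and the statsmodels-based approach below are assumptions for the sketch, not the bank's actual tooling.

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical counts: clicks on the owned-media placement and completed applications.
clicks = [120_000, 118_500]          # variant A, variant B
applications = [3_600, 3_950]

stat, p_value = proportions_ztest(count=applications, nobs=clicks)
rate_a, rate_b = (a / c for a, c in zip(applications, clicks))
print(f"A: {rate_a:.2%}  B: {rate_b:.2%}  p = {p_value:.4f}")
```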

One of the key achievements during this period was the successful implementation of personalized product recommendations for customers based on their existing holdings. By tailoring product offerings to individual customer profiles, we not only improved user satisfaction but also significantly increased digital account acquisition. Additionally, we revamped enrollment flows for various products to make them more user-friendly and intuitive, thereby removing barriers to entry and encouraging more customers to engage with our products. Other more incremental changes were made on a continuous basis. Each of these changes was tested to ensure improvements were marginal and sustained.

As a result of these strategic changes, we witnessed a 7% increase in total account volume year over year, primarily driven by an increase in deposit accounts, translating to approximately $20 million in revenue each month.

Project Image 1


Creating a data pipeline to analyze Microsoft's survey data programmatically

During my early days at Microsoft, my primary focus was analyzing respondent data and adjusting mailers to inform stakeholders. It quickly became apparent that more time for research would allow me to uncover novel insights, which was a more valuable use of time. To address the analysis hurdle, I developed a series of scripts designed to automate various aspects of data management. These scripts streamlined the organization of respondent data, validated survey rating confidence intervals, populated mailer files, and finally fine-tuned and deployed a BART-large-MNLI topic modeling solution to classify open-ended responses. Fine-tuning increased accuracy from 61% to 83%. This change in process not only streamlined our data analysis but also introduced an objective tracking system, eliminating the need for extensive manual work and allowing greater consistency, as we no longer relied on human ratings.
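A minimal sketch of how open-ended responses can be routed through a BART-large-MNLI classifier via the Transformers pipeline; the topic labels below are hypothetical, and a fine-tuned checkpoint directory could be substituted for the base model.

```python
from transformers import pipeline

# Hypothetical topic labels for open-ended survey comments.
TOPICS = ["meeting overload", "remote collaboration", "office space",
          "work-life balance", "tools and hardware"]

classifier = pipeline(
    "zero-shot-classification",
    model="facebook/bart-large-mnli",   # or a fine-tuned checkpoint directory
)

def classify_comment(comment: str) -> str:
    result = classifier(comment, candidate_labels=TOPICS)
    return result["labels"][0]          # highest-scoring topic

print(classify_comment("Too many back-to-back calls leave no time for focus work."))
```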

As part of the evolution of this system, I further automated the process, ensuring that every week, incoming responses were seamlessly processed and analyzed in real-time. This automation allowed us to track and compare trends in employee comments over time, providing invaluable insights into the evolving sentiments and needs of our workforce. By implementing this workflow, we significantly enhanced our ability to understand employee feedback, ultimately contributing to a more responsive and employee-centric approach within Microsoft.

Project Image 1

Engineering a data lake for digital acquisitions at Chase

To facilitate a unified product tracking system across Chase, I spearheaded a significant technical initiative aimed at constructing a robust data lake harnessing the behavioral clickstream data associated with customer acquisition. The underlying behavioral database of this acquisition data repository was substantial, given that Chase.com ranks among the top 50 most frequently visited websites in the United States, with 800 million monthly visits and 6 billion page views. Query times for this expansive database were prohibitively long due to its size, and the data within it was not structured in a user-friendly manner, requiring analysts to perform complex table joins to extract meaningful insights.

My role involved architecting a faster, more intuitive database capable of analyzing acquisition activities across all consumer banking products. This work required a mastery of the data as well as the business. Data mastery was essential because it allowed selection of the right columns and unique indexes to speed up queries. Our efforts resulted in a significant boost in efficiency: analysts experienced a more than 90% reduction in query time, allowing them to access crucial insights almost instantly instead of in hours. Furthermore, we simplified the database structure, reducing the number of tables required to see an application journey from four to just one.
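The production work was done in Teradata SQL, but the consolidation idea (pre-joining the journey into one wide table keyed by application) can be sketched in pandas as below; the table and column names are hypothetical.

```python
import pandas as pd

# Hypothetical extracts of the four source tables an analyst previously had to join.
clicks = pd.read_parquet("clickstream.parquet")       # visit_id, link_id, page, ts
sessions = pd.read_parquet("sessions.parquet")        # visit_id, customer_id, device
app_starts = pd.read_parquet("app_starts.parquet")    # application_id, visit_id, product, ts
app_ends = pd.read_parquet("app_outcomes.parquet")    # application_id, status, ts

journey = (
    app_starts
    .merge(app_ends, on="application_id", suffixes=("_start", "_end"))
    .merge(sessions, on="visit_id")
    .merge(clicks, on="visit_id")
    # Keep one row per application journey; for brevity this keeps the first click row,
    # whereas a real script would pick the referring link deliberately.
    .drop_duplicates("application_id")
)
journey.to_parquet("acquisition_journeys.parquet")
```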

By the end of the project, I had developed an efficient Teradata script that transformed the dataset into a structured table, guaranteeing data integrity, efficiency, and scalability. This automation ensured that the data remained up to date even in my absence. The resulting table allowed us to filter and analyze pivotal fields such as product type and customer demographics, as well as which specific link drove traffic, providing invaluable insights into our digital acquisition strategies and revenue generation. Moreover, it enabled data-driven assessments of advertising effectiveness and optimizations to the user experience, significantly contributing to the bank's overall success.

Project Image 1

Developing a customer segmentation approach to reduce marketing expenses by 20%

At Independence Blue Cross, I led a project focused on optimizing our direct mail marketing for health insurance enrollment. To do this, I employed Random Forest techniques to identify the most promising prospects for our mailers. The goal was to make our marketing campaigns more cost-effective and efficient. I used Random Forest to build predictive models that assessed the likelihood of individuals enrolling in health insurance after receiving our mailers. This involved tasks like data preprocessing, feature engineering, and model selection.

I fine-tuned the algorithm to identify prospects with the highest enrollment potential by analyzing various factors like demographics, health history, and past mailing responses. The results were significant. Through this data-driven approach, we managed to reduce our marketing expenses by over 20% while achieving the same enrollment results as in previous years. Targeting prospects with the highest likelihood of enrolling made our operations more efficient and improved our return on investment.
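A stripped-down sketch of the modeling step with scikit-learn; the file name, features, and hyperparameters are illustrative assumptions, and it presumes the demographic, health history, and mailing-response features have already been encoded numerically.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical prospect file; "enrolled" marks whether the prospect enrolled
# after receiving a previous mailer.
df = pd.read_csv("prospects.csv")
X = df.drop(columns=["enrolled"])
y = df["enrolled"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

model = RandomForestClassifier(n_estimators=500, max_depth=8,
                               class_weight="balanced", random_state=42)
model.fit(X_train, y_train)

scores = model.predict_proba(X_test)[:, 1]
print("AUC:", roc_auc_score(y_test, scores))

# Mail only the prospects with the highest predicted enrollment probability.
ranked = X_test.assign(score=scores).sort_values("score", ascending=False)
```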

Project Image 1


Fixing Chase's platform-wide Interstitial feature

In my role at JP Morgan Chase, I played a pivotal part in identifying and addressing a critical issue with the effectiveness of the "Announcements" feature, which is intended to provide information on changes after a customer signs in. It began when I performed foundational research to assess the feature's performance. What caught my attention was that the Announcements were consistently underperforming, falling far below the expected metrics. It became evident that significantly fewer users were actually seeing these Announcements, even though the feature was designed to reach everyone eventually. In some instances, the visibility rate for an Announcement was as low as 30%.

Upon digging deeper, I uncovered several key insights that shed light on the problem. First, I realized that all "Sign in interstitials," including "Announcements," were competing for priority in terms of display. Second, I discovered that the platform responsible for showcasing these interstitials wasn't under the jurisdiction of any internal team, resulting in a lack of controls and design guidelines. Third, a closer examination of other "Sign in interstitials" revealed a stark lack of design cohesion across the board. Lastly, I found that some of these interstitials were broken, repeatedly disrupting users instead of being dismissed after the initial display. User feedback in our Voice of Customer (VOC) channels confirmed that this issue was causing significant frustration.

To tackle this issue head-on, my team within the organization took charge of the "Sign in interstitials" platform. We implemented comprehensive design guidelines, ensuring consistency and effectiveness across all interstitials. Additionally, we fixed the broken ones, ensuring that they would function correctly and no longer disrupt users excessively. This initiative not only improved the user experience but also enhanced our ability to effectively communicate important announcements to our customers.

Project Image 1

Unifying Chase's digital acquisition approach

When I arrived at Chase, there was an expectation that each product group (Checking, Savings, Credit Cards, Personal Loans, Mortgages, Auto Leases, and Auto Loans) perform its own analysis, reporting, and communication. In addition, because each group worked independently, these groups often competed for resources to each other's detriment. As a result, the analyses the product groups performed were not comparable to one another, and bank-wide success could not be the priority.

To tackle this challenge, I took the lead in developing a cohesive data management framework designed to facilitate consistent performance evaluation across various units. At the core of this endeavor was the overhaul and standardization of transactional enrollment data, as well as behavioral clickstream data associated with the acquisition process. This undertaking involved in-depth research into the user journey, including meticulous tracking and documentation of all links and pages leading to customer applications. By tracing these paths, we constructed a comprehensive taxonomy and subsequently created a Teradata script. This script efficiently organized the vast Adobe dataset, containing billions of rows, into a structured acquisition table of thousands of rows. This achievement established a common framework for analysis, streamlining the process and enhancing our ability to assess performance consistently across diverse products.

This project had a profound effect on our understanding of our business. With the implementation of a standardized script and the establishment of a universal data repository, we gained the ability to easily compare each group's performance metrics. This newfound capability to engage in performance discussions using a common script, rather than relying on separate and delayed data sources, significantly enhanced our agility and decision-making. It also provided us with a holistic "All Chase" view of our products, enabling us to prioritize products at a high level rather than locally. In essence, this organizational overhaul not only bolstered our data analytics capabilities but also facilitated more informed decision-making.

Project Image 1


Hiring 5 team members at Chase to rebuild a depleted team

During the Great Resignation in early 2022, my team was depleted significantly and suddenly. I took on the task of re-expanding our team's capacity. Over the course of just three months, I successfully recruited and onboarded five new team members, growing our team from three to eight. This endeavor was not just about increasing headcount; it was about strategically identifying and bringing on board individuals who would not only fit seamlessly into our team culture but also excel in their roles.

To achieve this, I developed a set of objective criteria to evaluate potential candidates. These criteria encompassed both a behavioral portion of the interview process and a coding assessment. It was essential for us to assess not only technical skills but also the interpersonal qualities that contribute to a positive and productive team dynamic. Notably, each of these new hires not only integrated successfully into the organization but also demonstrated outstanding performance in their respective roles, consistently delivering valuable contributions to our team's success. All five of the team members hired were still in the organization and adding value one year later, proving that the decisions made were the right ones.

Project Image 1

Teaching team members at Microsoft data skills to increase their productivity

During my tenure at Microsoft, I had the opportunity to play a pivotal role in enhancing the capabilities of our qualitative UX research team. Recognizing the increasing importance of data skills in our field, I took the initiative to teach team members the fundamentals of scripting and coding. This endeavor aimed not only to empower team members with new technical skills but also to leverage these skills to boost productivity and the overall performance of our team.

I ensured that the training covered the basics of scripting and coding, making it accessible and applicable to individuals with varying levels of technical experience. Through hands-on workshops and personalized coaching, team members acquired the necessary skills to manipulate and analyze data efficiently, enabling them to draw deeper insights from their research findings. The benefits of this initiative were multifaceted. Not only did team members gain a new skill set that made them more self-reliant in data-related tasks, but it also led to a significant improvement in our team's overall productivity. With the ability to process and analyze data independently, our research projects became more streamlined and efficient. This, in turn, allowed us to provide more timely and data-driven insights to inform product development decisions.

Project Image 1

Learn more about me.

Quantitative UX Researcher, Data Scientist & Mentor

Terron is a skilled data scientist and quantitative UX researcher with industry experience in healthcare, finance and technology who is dedicated to optimizing user experiences through data-driven insights and innovative research.

  • City: New York City
  • Email: TerronGrahamPro@Gmail.com
  • Undergrad: Pennsylvania State University
  • Grad: University of Pennsylvania

He is seeking challenging opportunities where he can serve as a thought leader and mentor, contributing to an organization's growth both in terms of business success and cultural development. Terron is driven by a passion for fostering innovation and empowering products and teams to reach their full potential.

Skills

UX Methods

Survey Methodology

A/B Testing

Persona Creation

R

Tidyverse

dplyr

ggplot

Python

Transformers

Pandas

Numpy

SQL

Tableau

Adobe Analytics

Google Analytics