HooverAI2

In 2020, I launched Hoover AI2 (Hoover Applied Artificial Intelligence Initiative) with support from a group of Hoover Institution senior fellows and the Hoover Institution Library and Archives. The Initiative motivated me to deepen my knowledge of AI by auditing a wide range of courses on artificial intelligence (machine learning, natural language processing, and convolutional neural networks) at Stanford’s School of Engineering, Department of Economics, and Department of Political Science. I also worked to establish stronger ties between Stanford faculty, students, and the Hoover Institution by initiating various projects related to AI and VR/AR technologies.

The primary goal of Hoover AI2 was to use AI to advance the Hoover Institution’s mission, support its critical scholarship, and assist the daily operations of the Hoover Library & Archives. I strongly believe that through projects like this, the Institution and other think tanks can bridge the growing gap between artificial intelligence research in public policy and hands-on artificial intelligence projects built by engineering talent.

List of Projects

Data processing software
Goal: Assistance with a variety of data processing tasks to help standardize and accelerate metadata processing and record keeping.
[GitHub] [WebApp]
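As a rough illustration of the kind of metadata standardization this involved, here is a minimal sketch in Python; the file names and column names are hypothetical, not the actual Hoover Library & Archives schema:

```python
import pandas as pd

# Hypothetical metadata export; the real schema and file names may differ.
df = pd.read_csv("collections_metadata.csv")

# Normalize creator names: strip stray whitespace and unify casing.
df["creator"] = df["creator"].str.strip().str.title()

# Standardize dates to ISO 8601; unparseable values become NaN.
df["date"] = pd.to_datetime(df["date"], errors="coerce").dt.strftime("%Y-%m-%d")

# Drop exact duplicate records and write out the cleaned file.
df.drop_duplicates().to_csv("collections_metadata_clean.csv", index=False)
```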

Recommendation system for archival collections based on content-based filtering algorithm
Goal: Development of a prototype of a recommendation system for the Hoover Library and Archives based on semantic analysis of metadata, providing researchers with customized recommendations for collections.
[WebApp]
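To give a sense of the content-based approach, the sketch below computes TF-IDF vectors over collection descriptions and recommends the most similar collections by cosine similarity. The sample descriptions are invented for illustration and are not the prototype’s actual code or data:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented collection descriptions; the prototype used real archival metadata.
collections = {
    "Poland Collection": "Solidarity movement, Cold War, Polish underground press",
    "Firing Line": "television interviews, William Buckley, American politics",
    "Russia Archive": "Soviet history, Cold War, communist party records",
}

titles = list(collections.keys())
tfidf = TfidfVectorizer(stop_words="english").fit_transform(collections.values())
sim = cosine_similarity(tfidf)  # pairwise similarity between descriptions

def recommend(title, top_n=2):
    """Return the top_n collections most similar to `title`."""
    idx = titles.index(title)
    ranked = sim[idx].argsort()[::-1]
    return [titles[i] for i in ranked if i != idx][:top_n]

print(recommend("Poland Collection"))
```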

Recommendation system for archival collections based on user-based collaborative filtering algorithm
Goal: Development of a prototype of a recommendation system for the Hoover Library and Archives based on the user data (2000-present), providing researchers with customized recommendations for collections.
[WebApp]
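The user-based variant can be sketched in a few lines: each researcher is represented by a row of past collection requests, and unseen collections are scored by the similarity-weighted votes of other users. The interaction matrix below is made up; the prototype used actual reader data (2000-present):

```python
import numpy as np

# Made-up user-by-collection interaction matrix (1 = requested the collection).
# Rows are researchers, columns are archival collections.
interactions = np.array([
    [1, 0, 1, 0],
    [1, 1, 0, 0],
    [0, 1, 1, 1],
])

def recommend_for_user(user_idx, top_n=2):
    """Score unseen collections by similarity-weighted votes of other users."""
    # Cosine similarity between the target user and every other user.
    norms = np.linalg.norm(interactions, axis=1)
    sims = (interactions @ interactions[user_idx]) / (norms * norms[user_idx] + 1e-9)
    sims[user_idx] = 0.0  # ignore self-similarity

    # Weight each collection by how similar its readers are to the target user,
    # then hide collections the user has already requested.
    scores = sims @ interactions
    scores[interactions[user_idx] > 0] = -np.inf
    return np.argsort(scores)[::-1][:top_n]

print(recommend_for_user(0))  # indices of recommended collections
```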

Experimentation with AI image processing tools
Goal: Development of and/or experimentation with image processing tools. This may include image manipulation (cropping, rotation, etc.), colorization of archival black & white images, image classification, and image captioning (i.e., generating captions that describe the contents of an image). It also includes assisting Hoover Digital Services & Systems with projects involving named-entity recognition, OCR/HCR technologies, and image analysis, using natural language processing and image processing software.
[WebApp]
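The sketch below shows the basic image-manipulation end of this work using Pillow; the file names and crop box are placeholders, and colorization, classification, and captioning would sit on top of models not shown here:

```python
from PIL import Image, ImageOps

# Placeholder filename for an archival scan.
img = Image.open("archival_scan.tif")

# Basic manipulation: crop a region of interest and deskew the scan.
cropped = img.crop((100, 100, 900, 700))   # (left, upper, right, lower)
rotated = cropped.rotate(-2, expand=True)  # rotate 2 degrees clockwise

# Convert to grayscale and normalize contrast before downstream tasks
# such as classification, captioning, or OCR.
normalized = ImageOps.autocontrast(rotated.convert("L"))
normalized.save("archival_scan_processed.png")
```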

Exploration of textual data
Goal: Creation of a system that relies on state-of-the-art natural language processing methods to generate speeches in a given style, such that no one can tell the difference between human-written and computer-generated speeches. The project includes:

a) a prototype of a chatbot (Dialogflow, Amazon Alexa) that includes a recommendation system;
[WebApp]
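For the Dialogflow side of (a), a minimal detect-intent call looks roughly like the sketch below, based on the standard google-cloud-dialogflow v2 client; the project ID, session ID, and sample utterance are placeholders, and the recommendation logic would live in the agent’s fulfillment backend:

```python
from google.cloud import dialogflow

def ask_agent(text, project_id="my-hoover-agent", session_id="demo-session"):
    """Send a user utterance to a Dialogflow agent and return its reply."""
    session_client = dialogflow.SessionsClient()
    session = session_client.session_path(project_id, session_id)

    text_input = dialogflow.TextInput(text=text, language_code="en")
    query_input = dialogflow.QueryInput(text=text_input)

    response = session_client.detect_intent(
        request={"session": session, "query_input": query_input}
    )
    return response.query_result.fulfillment_text

print(ask_agent("Which collections cover the Cold War?"))
```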

b) experimentation with the GPT-3 language model: fine-tuning, generation of creative content (political speeches, stories, ideas, questions), text classification (sentiment analysis of a large corpus of text with respect to people, events, and organizations), information retrieval (question answering), and automatic text summarization;
[GitHub]
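To give a flavor of the GPT-3 experiments in (b), here is a zero-shot summarization call against the OpenAI completion endpoint as it existed around 2020-2021 (the legacy openai Python client); the engine name, parameters, and passage are illustrative, and the current API differs:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Zero-shot summarization with the legacy completion endpoint (openai < 1.0).
prompt = (
    "Summarize the following passage in two sentences:\n\n"
    "<passage from an archival document>\n\n"
    "Summary:"
)

response = openai.Completion.create(
    engine="davinci",
    prompt=prompt,
    max_tokens=100,
    temperature=0.3,
)
print(response["choices"][0]["text"].strip())
```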

c) experimentation with the GPT-2 language model: a model fine-tuned on the “Firing Line” collection was capable of generating dialogues between William Buckley and his guests based on the prompt it received.
[Presentation]
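As a rough sketch of the conditional generation in (c), the snippet below uses the generic pretrained GPT-2 checkpoint from Hugging Face Transformers; the actual project used a model fine-tuned on the “Firing Line” transcripts, and the prompt here is invented:

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Generic pretrained checkpoint; the project fine-tuned GPT-2 on "Firing Line".
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# A dialogue-style prompt acts as the "clue" the model continues from.
prompt = "BUCKLEY: Let me ask you about the future of conservatism.\nGUEST:"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

output = model.generate(
    input_ids,
    max_length=120,
    do_sample=True,
    top_k=50,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```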