National Center for Supercomputing Applications WordPress Master Calendar
AlphaTrans: A Neuro-Symbolic Compositional Approach for Repository-Level Code Translation and Validation
Event Type: Seminar/Symposium
Sponsor: PL/FM/SE
Location: 0222 Siebel Center and virtual
Date: Oct 4, 2024, 2:00-3:00 pm
Speakers: Kaiyao Ke, UIUC, and Adharsh Kamath, UIUC
Contact: Kristin Irle
E-Mail: kirle@illinois.edu
Phone: 217-244-0229
Originating Calendar: Siebel School Speakers Calendar
Talk 1 (2:00-2:30 pm)
Title: AlphaTrans: A Neuro-Symbolic Compositional Approach for Repository-Level Code Translation and Validation
Speaker: Kaiyao Ke, UIUC
Abstract: Code translation transforms programs from one programming language (PL) to another. One prominent use case is application modernization to enhance maintainability and reliability. Several rule-based transpilers have been designed to automate code translation between different pairs of PLs, but their rules can become obsolete as the PLs evolve and do not generalize to other PLs. Recent studies have explored automating code translation with Large Language Models (LLMs). A key observation is that such techniques may work well on crafted benchmarks but fail to generalize to the scale and complexity of real-world projects, with their inter- and intra-class dependencies, custom types, and PL-specific features. We propose AlphaTrans, a neuro-symbolic approach to automate repository-level code translation. AlphaTrans translates both source and test code, and it employs multiple levels of validation to ensure the translation preserves the functionality of the source program. To break the problem down for LLMs, AlphaTrans leverages program analysis to decompose the program into fragments and translates them in reverse call order. We used AlphaTrans to translate ten real-world open-source projects comprising 836 classes, 8,575 methods, and 2,719 tests. AlphaTrans translated the entire repository of each project, covering 6,899 source code fragments in total. 99.1% of the translated code fragments are syntactically correct, and AlphaTrans validates runtime behavior and functional correctness for 25.8% of them. On average, the integrated translation and validation take 36 hours per project (min=4, max=122), showing the approach's scalability in practice. For syntactically or semantically incorrect translations, AlphaTrans generates a report including the existing translation, stack trace, test errors, or assertion failures. We provided these artifacts to two developers to fix the translation bugs in four projects. They were able to fix the issues in 20.1 hours on average (5.5 hours for the smallest project and 34 hours for the largest) and achieve all passing tests. Without AlphaTrans, translating and validating such large projects could take weeks, if not months.
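The abstract describes decomposing a repository into fragments and translating them in reverse call order, so that every callee is handled before its callers. A minimal sketch of that ordering step, assuming a call graph is already extracted (the function names and dict representation here are illustrative, not AlphaTrans internals):

```python
from graphlib import TopologicalSorter

# Hypothetical call graph for a tiny project: an entry caller -> {callees}
# means `caller` invokes each function in the set. (Illustrative names only.)
call_graph = {
    "main": {"parse_args", "run"},
    "run": {"load_config", "process"},
    "process": {"helper"},
    "parse_args": set(),
    "load_config": set(),
    "helper": set(),
}

def reverse_call_order(graph):
    """Order fragments so every callee precedes its callers.

    TopologicalSorter treats the mapped-to set as prerequisites, so feeding
    it caller -> {callees} directly emits leaf functions (pure callees)
    first -- the reverse call order the talk describes.
    """
    return list(TopologicalSorter(graph).static_order())

order = reverse_call_order(call_graph)
print(order)  # leaves like "helper" appear before "process", "main" is last
```

With this ordering, each fragment can be translated and validated with all of its dependencies already available in the target language.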
Talk 2 (2:30-3:00 pm)
Title: Leveraging LLMs for Program Verification
Speaker: Adharsh Kamath, UIUC
Abstract: We investigate the code reasoning skills of Large Language Models (LLMs) in the context of formal program verification. Specifically, we look at the problems of inferring loop invariants and ranking functions, for proving safety properties and loop termination, respectively. We demonstrate how emergent capabilities of LLMs can be exploited through a combination of prompting techniques and by using them in conjunction with symbolic algorithms. We curate and contribute a dataset of verification problems inspired by past work, and we perform a rigorous evaluation on this dataset to establish that LLMs have the potential to improve the state of the art in program verification.
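The invariant-inference problem above has a simple checkable core: a candidate invariant must hold on loop entry, be preserved by the loop body, and, together with the negated guard, imply the postcondition. A minimal sketch that brute-forces these three conditions for a toy summation loop (the loop, the candidate invariant, and the bound are illustrative; a real verifier would discharge these conditions with an SMT solver rather than enumeration):

```python
from itertools import product

# Toy loop: i = 0; s = 0; while i < n: i += 1; s += i
# Candidate invariant, as an LLM might propose it:
#   s == i * (i + 1) // 2  and  0 <= i <= n
def invariant(i, s, n):
    return s == i * (i + 1) // 2 and 0 <= i <= n

def check_inductive(inv, bound=15):
    """Brute-force the three Hoare conditions over a small state space:
    initiation, consecution (preservation by the body), and that the
    invariant plus the negated guard implies the postcondition
    s == n * (n + 1) // 2."""
    for n in range(bound):
        # Initiation: the invariant holds in the initial state (i=0, s=0).
        if not inv(0, 0, n):
            return False
        for i, s in product(range(bound), range(bound * bound)):
            if inv(i, s, n) and i < n:
                # Consecution: one loop iteration preserves the invariant.
                if not inv(i + 1, s + i + 1, n):
                    return False
            if inv(i, s, n) and not (i < n):
                # Exit: invariant + negated guard gives the postcondition.
                if s != n * (n + 1) // 2:
                    return False
    return True

print(check_inductive(invariant))  # → True: this candidate is inductive
```

An LLM-proposed candidate that fails any of the three conditions is rejected, and the failing state can be fed back as a counterexample to guide the next proposal.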