Introduction Motivation Implementation Evaluation Conc.
Hybridize Functions: A Tool for Automatically
Refactoring Imperative Deep Learning Programs to
Graph Execution
Raffi Khatchadourian1,2, Tatiana Castro Vélez2, Mehdi Bagherzadeh3,
Nan Jia2, Anita Raja1,2
1 City University of New York (CUNY) Hunter College, USA
2 City University of New York (CUNY) Graduate Center, USA
3 Oakland University, USA
International Conference on Fundamental Approaches to Software
Engineering
May 5, 2025, Hamilton, Canada
Khatchadourian, Castro Vélez, Bagherzadeh, Jia, Raja Hybridize Functions Imperative DL Refactoring 1 / 18
Deep Learning Systems & Run-time Performance
Machine Learning (ML), including Deep Learning (DL), systems are
pervasive.
As datasets grow, efficiency becomes essential to support
responsiveness [Zhou et al., 2020].
For efficiency, DL frameworks have traditionally embraced a deferred
execution style supporting graph-based Deep Neural Network (DNN)
computation.
Scalable, but development is . . .
Error-prone.
Cumbersome.
Produces programs that are difficult to debug.
Because graph computation executes statements in a non-imperative
order, traditional SE tools cannot help troubleshoot bugs [Arpteg
et al., 2018].
TensorFlow Deferred Execution-style Code
1 # Build a graph.
2 a = tf.constant(5.0)
3 b = tf.constant(6.0)
4 c = a * b
5
6 # Launch graph in a session.
7 sess = tf.Session()
8
9 # Evaluate the tensor `c`.
10 print(sess.run(c)) # prints 30.0
Lines 2–4 build a computation graph.
Line 4 does not execute until the Session is run on line 10.
No native support for common imperative program constructs, e.g.,
iteration.
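The deferred model can be sketched in a few lines of plain Python (the `Node` and `Session` classes below are illustrative stand-ins for the TF 1 API, not TensorFlow itself):

```python
# Minimal sketch of deferred (graph) execution. `Node` and `Session`
# are illustrative stand-ins for the TF 1 API, not TensorFlow itself.
class Node:
    def __init__(self, fn):
        self.fn = fn  # thunk that computes this node's value on demand

    def __mul__(self, other):
        # Writing `a * b` only records the operation; nothing runs yet.
        return Node(lambda: self.fn() * other.fn())

def constant(value):
    return Node(lambda: value)

class Session:
    def run(self, node):
        # Evaluation is deferred until the session runs the graph.
        return node.fn()

a = constant(5.0)
b = constant(6.0)
c = a * b                # builds the graph; no multiplication yet
print(Session().run(c))  # prints 30.0
```

Nothing computes until `run`, which enables whole-graph optimization but makes statements execute out of their written order.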
Imperative DL Programming, Eager Execution, &
Hybridization
Imperative DL frameworks (e.g., TensorFlow Eager, Keras, PyTorch)
encouraging eager execution are more natural, less error-prone, and
easier to debug.
This, however, sacrifices run-time performance.
Thus, hybrid approaches (e.g., Hybridize, TorchScript, AutoGraph)
have surfaced that:
Execute imperative DL programs as static graphs at run-time.
Are integrated into mainstream DL frameworks (e.g.,
TensorFlow, MXNet, PyTorch).
Eager TensorFlow Imperative (OO) DL Model Code
1 class SequentialModel(tf.keras.Model):
2 def __init__(self, **kwargs):
3 super(SequentialModel, self).__init__(...)
4 self.flatten = layers.Flatten(input_shape=(28, 28))
5 num_layers = 100 # Add many small layers.
6 self.layers = [layers.Dense(64, activation="relu") for n in range(num_layers)]
7 self.dropout = tf.keras.layers.Dropout(0.2)
8 self.dense_2 = tf.keras.layers.Dense(10)
9
10
11 def __call__(self, x):
12 x = self.flatten(x)
13 for layer in self.layers:
14 x = layer(x)
15 x = self.dropout(x)
16 x = self.dense_2(x)
17 return x
Hybridized TensorFlow Imperative (OO) DL Model Code
1 class SequentialModel(tf.keras.Model):
2 def __init__(self, **kwargs):
3 super(SequentialModel, self).__init__(...)
4 self.flatten = layers.Flatten(input_shape=(28, 28))
5 num_layers = 100 # Add many small layers.
6 self.layers = [layers.Dense(64, activation="relu") for n in range(num_layers)]
7 self.dropout = tf.keras.layers.Dropout(0.2)
8 self.dense_2 = tf.keras.layers.Dense(10)
9
10 @tf.function(...) # Executes model as graph (optional args).
11 def __call__(self, x):
12 x = self.flatten(x)
13 for layer in self.layers:
14 x = layer(x)
15 x = self.dropout(x)
16 x = self.dense_2(x)
17 return x
On line 10, AutoGraph is used to potentially enhance performance.
Decorates the model’s call() method with @tf.function.
At run-time, call()’s execution will be “traced” (∼9.22× speedup).
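Tracing itself can be sketched in plain Python (the `trace` function and `Recorder` class below are hypothetical, not TensorFlow’s implementation): the function runs once to record its operations, and later calls replay the recording without re-executing the Python body:

```python
# Illustrative sketch of "tracing": run the Python function once to
# record its operations, then replay the recorded ops on later calls.
# `trace` and `Recorder` are hypothetical names, not TensorFlow APIs.
def trace(fn, example_input):
    ops = []

    class Recorder:
        def __init__(self, value):
            self.value = value

        def apply(self, op):
            ops.append(op)                 # record the operation
            return Recorder(op(self.value))

    fn(Recorder(example_input))            # the one-time tracing run

    def replay(x):                         # the "graph": just the ops
        for op in ops:
            x = op(x)
        return x
    return replay

graph = trace(lambda r: r.apply(lambda v: v * 2).apply(lambda v: v + 1), 3)
print(graph(10))  # prints 21: replays *2 then +1 without Python overhead
```

Replaying the recorded operations is what lets the framework optimize and speed up the computation.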
Hybridization Drawbacks
Needs non-trivial, specialized metadata [Jeong et al., 2019].
Exhibits limitations and known issues with native program constructs.
Subtle considerations required to:
Specify (decorate) the functions to be migrated.
Make code amenable to safe, accurate, and efficient graph execution.
Avoid performance bottlenecks and semantically inequivalent
results [Cao et al., 2022, Castro Vélez et al., 2022].
Manual analysis and refactoring (semantics-preserving,
source-to-source transformation) for optimal results can be error-
and omission-prone [Dig et al., 2009].
Further complicated by:
Increasing Object-Orientation (OO) in DL model code (e.g., Keras).
Dynamically-typed languages (e.g., Python).
Imperative DL Code With Python Side-effects
1 @tf.function
2 def f(x):
3 print("Input: ", x)
4 f(1)
5 f(1)
6 f(2)
Output (expecting 1, 1, 2):
Input: 1
Input: 2
Side-effect-producing, native Python statements, e.g., printing, list
appending, and global variable mutation, are problematic for
tf.function-decorated functions (i.e., “tf.functions”).
Because such functions are traced, a function’s behavior is “etched”
into its corresponding graph.
This can have unexpected results, executing side-effects multiple
times or not at all.
Side-effects occur when tf.functions are called the first time.
Subsequent calls with similar arguments execute the graph instead.
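A plain-Python sketch of this behavior (illustrative only; the `function` decorator below mimics `@tf.function`’s observable trace-once semantics, not its implementation):

```python
# Illustrative sketch of @tf.function's observable behavior, not its
# implementation: trace once per argument "signature", replay after.
def function(fn):
    traced = set()  # argument signatures already traced

    def wrapper(*args):
        if args not in traced:
            traced.add(args)
            fn(*args)  # tracing run: Python side effects execute here
        # Subsequent calls with the same signature run the cached
        # graph, so the Python-level side effect is not re-executed.
    return wrapper

@function
def f(x):
    print("Input:", x)

f(1); f(1); f(2)  # prints "Input: 1" then "Input: 2" only
```

The second `f(1)` produces no output because the side effect only fired during tracing, matching the surprising output on the slide.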
Imperative (OO) DL Code With Python Side-effects
1 class Model(tf.Module):
2 def __init__(self):
3 self.v = tf.Variable(0)
4 self.counter = 0
5
6 @tf.function
7 def __call__(self):
8 if self.counter == 0:
9 self.counter += 1
10 self.v.assign_add(1)
11 return self.v
12 m = Model()
13 for n in range(3):
14 print(m().numpy())
Output (expecting 1, 1, 1):
1
2
3
The model uses a counter to guard a variable increment.
The initial value of counter (line 4), however, is captured during
tracing upon the first model invocation (line 14).
Variable v is incremented unconditionally (line 10) each time the
model is invoked.
Such problems are common in migrating to graph execution.
Can result in suspicious numerical results or lower performance.
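The capture can be sketched in plain Python (illustrative; not TensorFlow’s actual mechanism): Python-level control flow is evaluated once at trace time, while recorded “tensor” ops replay on every call:

```python
# Sketch of why the guard fails (illustrative, not TensorFlow):
# plain-Python control flow is evaluated immediately during tracing,
# while "tensor" ops are recorded into the graph and replayed later.
class Model:
    def __init__(self):
        self.v = 0          # stands in for tf.Variable(0)
        self.counter = 0
        self._graph = None  # recorded ops after tracing

    def __call__(self):
        if self._graph is None:        # first call: the tracing run
            ops = []
            if self.counter == 0:      # Python: evaluated ONCE, now
                self.counter += 1
                # the recorded tensor op; the guard is not recorded
                ops.append(lambda: setattr(self, "v", self.v + 1))
            self._graph = ops          # only the tensor op survives
        for op in self._graph:         # every call replays the graph
            op()
        return self.v

m = Model()
print([m() for _ in range(3)])  # prints [1, 2, 3], not [1, 1, 1]
```

The guard is gone from the graph, so the increment runs unconditionally on every invocation.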
Problem Insight
Although imperative DL code is sequentially executed, hybridizing code
resembles parallelizing sequential code.
Example
To avoid unexpected behavior, hybrid functions, like concurrent
programs, should avoid side-effects.
Idea
Adapt concepts from automated refactorings that parallelize sequential
code, e.g., Streaming APIs [Khatchadourian et al., 2019].
Refactorings
Two new, fully-automated refactorings:
Convert Eager Function to Hybrid Transforms otherwise
eagerly-executed imperative (Python) DL code for
enhanced run-time performance.
Automatically specifies (decorates) whether and how
code could be reliably and efficiently executed as
graphs at run-time.
Avoids hybridizing code under certain conditions
(e.g., side-effecting code) to preserve semantics.
Optimize Hybrid Function Transforms code already running as graphs for
optimal run-time performance.
Possibly dehybridize code when eager execution could
be faster (e.g., graph “retracing”).
Issues refactoring “warnings” when hybrid code may
have unexpected results but refactoring is not
possible due to semantics preservation.
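Retracing, one trigger for dehybridization, can be sketched in plain Python (the `hybrid` decorator below is hypothetical, not TensorFlow’s API): each unseen Python-scalar argument forms a new signature and forces a fresh, costly trace:

```python
# Illustrative sketch of "retracing" (not TensorFlow's implementation):
# the traced function is cached per input signature; each new Python
# scalar argument creates a new signature, forcing a costly retrace.
def hybrid(fn):
    traces = {}

    def wrapper(*args):
        sig = args                 # Python values: new value, new trace
        if sig not in traces:
            wrapper.retraces += 1  # expensive graph (re)construction
            traces[sig] = fn
        return traces[sig](*args)
    wrapper.retraces = 0
    return wrapper

@hybrid
def step(lr):
    return lr * 0.5

for lr in (0.1, 0.01, 0.001):
    step(lr)
print(step.retraces)  # prints 3: one retrace per call; eager may win
```

When every call retraces, graph construction overhead dominates and eager execution can be faster, which is what the Optimize Hybrid Function refactoring detects.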
Approach Highlights
Novel tensor analysis for imperative DL code.
Current analyzers work only on procedural (TF 1) code.
Modernization of WALA Ariadne [Dolby et al., 2018] for imperative
(TF 2) code.
Implemented as a PyDev Eclipse IDE plug-in [Zadrozny, 2023].
Integrates Ariadne for tensor type inference analysis.
Leverages complementary speculative analysis [Zhou et al., 2020]
using contextual DL keywords for difficult static cases.
Architecture & Dependencies
Eclipse leveraged for its refactoring framework and test
engine [Bäumer et al., 2001].
PyDev used for its efficient indexing, refactoring support, and
open-source availability for Python development.
WALA used for the static analyses (ModRef) upon which our
side-effect analysis is built.
WALA Ariadne used for Python analysis, tensor type inference, and
(TensorFlow) library modeling.
Figure: Screenshot of the Hybridize Functions refactoring preview wizard.
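As an illustration only (our tool uses WALA’s interprocedural ModRef analysis, not this heuristic), a simplified, intraprocedural check for obvious Python side effects in a candidate function might look like:

```python
import ast

# Simplified, intraprocedural sketch of a side-effect check. The
# actual tool builds on WALA's interprocedural ModRef analysis; this
# AST heuristic only flags two obvious cases for illustration.
def has_python_side_effects(source):
    tree = ast.parse(source)
    for node in ast.walk(tree):
        # print() calls would be "etched" into the traced graph
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "print"):
            return True
        # global declarations signal global variable mutation
        if isinstance(node, ast.Global):
            return True
    return False

safe = "def f(x):\n    return x * 2\n"
unsafe = "def g(x):\n    print(x)\n    return x\n"
print(has_python_side_effects(safe), has_python_side_effects(unsafe))
```

Functions flagged this way would be skipped by Convert Eager Function to Hybrid to preserve semantics.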
Challenges Addressed
Reworked much of the existing Java (JDT) refactoring tooling to
work with Python.
Integrated Ariadne with PyDev due to its excellent and long-lived
refactoring support for Python, including refactoring preview pane,
element GUI selection, and refactoring undo history.
Augmented Ariadne to analyze imperative Deep Learning (Python)
code by expanding XML summaries to support TensorFlow 2 APIs.
Added support for Python constructs commonly used in modern
imperative DL programs.
Correlated varying intermediate representations (IRs) with the
original Python source code for transformation.
Modernizing Ariadne: New Enhancements
Python module packages.
Wild card imports.
Intra-package references (relative imports; from .. import X).
Package initialization scripts.
Automatic discovery of unit test entry points.
Non-scalar tensor dataset [Google LLC, 2023] iteration.
Modeling of additional libraries.
Static and class methods analysis.
Analysis of custom decorators.
Callable object (functor) analysis (used in Keras).
Evaluation Summary
Analyzed 19 open-source Python imperative DL systems.
Varying size and domain.
Ranging from 0.12 to 36.72 KSLOC.
Refactored 42.56% of 766 candidate functions despite the approach’s
conservatism.
Run-time Performance Evaluation Summary
Measured an average relative model training speedup of 2.16×.
Memory consumption measurement pending.
Differences in model accuracy and loss before and after refactoring
were negligible.
Conclusion
Imperative DL code is easier to debug, write, and maintain.
Comes at the expense of (run-time) performance.
Hybridization bridges the gap between eager and graph execution.
Achieving optimal performance while preserving semantics is non-trivial.
Our Work
Open-source, automated refactoring PyDev Eclipse plug-in that
assists developers with writing optimal imperative DL Python code.
Integrates an Eclipse refactoring with WALA Ariadne static analyses.
Future Work
More advanced container-based analyses.
Automatically split functions.
First-class hybrid functions.
For Further Reading I
Abadi, Martín et al. (2016). “TensorFlow: A System for Large-Scale Machine Learning”. In: Symposium on
Operating Systems Design and Implementation.
Agrawal, Akshay et al. (2019). TensorFlow Eager: A Multi-Stage, Python-Embedded DSL for Machine
Learning. arXiv: 1903.01855 [cs.PL].
Apache (Apr. 8, 2021). Hybridize. Apache MXNet documentation. url:
https://mxnet.apache.org/versions/1.8.0/api/python/docs/tutorials/packages/gluon/blocks/hybridize.html (visited
on 04/08/2021).
Arpteg, A., B. Brinne, L. Crnkovic-Friis, and J. Bosch (2018). “Software Engineering Challenges of Deep
Learning”. In: Euromicro Conference on Software Engineering and Advanced Applications. IEEE, pp. 50–59.
doi: 10.1109/SEAA.2018.00018.
Bäumer, Dirk, Erich Gamma, and Adam Kiezun (Oct. 2001). “Integrating refactoring support into a Java
development tool”. url: http://people.csail.mit.edu/akiezun/companion.pdf (visited on 09/10/2024).
Cao, Junming, Bihuan Chen, Chao Sun, Longjie Hu, Shuaihong Wu, and Xin Peng (2022). “Understanding
Performance Problems in Deep Learning Systems”. In: FSE. FSE ’22. ACM, pp. 357–369. doi:
10.1145/3540250.3549123.
Castro Vélez, Tatiana, Raffi Khatchadourian, Mehdi Bagherzadeh, and Anita Raja (May 2022). “Challenges
in Migrating Imperative Deep Learning Programs to Graph Execution: An Empirical Study”. In: MSR. MSR
’22. ACM/IEEE. ACM. doi: 10.1145/3524842.3528455.
Chen, Tianqi, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang, Tianjun Xiao, Bing Xu,
Chiyuan Zhang, and Zheng Zhang (2015). “MXNet: A Flexible and Efficient Machine Learning Library for
Heterogeneous Distributed Systems”. In: Workshop on Machine Learning Systems at NIPS. arXiv: 1512.01274
[cs.DC].
Chollet, François (2020). Deep Learning with Python. 2nd ed. Manning.
For Further Reading II
Dig, Danny, John Marrero, and Michael D. Ernst (2009). “Refactoring sequential Java code for concurrency
via concurrent libraries”. In: ICSE, pp. 397–407. doi: 10.1109/ICSE.2009.5070539.
Dilhara, Malinda, Ameya Ketkar, Nikhith Sannidhi, and Danny Dig (2022). “Discovering Repetitive Code
Changes in Python ML Systems”. In: ICSE. ICSE ’22.
Dolby, Julian, Avraham Shinnar, Allison Allain, and Jenna Reinen (2018). “Ariadne: Analysis for Machine
Learning Programs”. In: MAPL. ACM SIGPLAN. ACM, pp. 1–10. doi: 10.1145/3211346.3211349.
Eclipse Foundation (June 2024). Eclipse IDE. url: https://eclipseide.org/ (visited on 09/10/2024).
Facebook Inc. (2019). PyTorch. TorchScript. en. url: https://pytorch.org/docs/stable/jit.html (visited on
02/19/2021).
Google LLC (Mar. 17, 2023). tf.data.Dataset. TensorFlow. Version 2.9.3. url:
https://www.tensorflow.org/versions/r2.9/api_docs/python/tf/data/Dataset (visited on 12/15/2023).
Jeong, Eunji, Sungwoo Cho, Gyeong-In Yu, Joo Seong Jeong, Dong-Jin Shin, Taebum Kim, and
Byung-Gon Chun (July 2019). “Speculative Symbolic Graph Execution of Imperative Deep Learning
Programs”. In: SIGOPS Oper. Syst. Rev. 53.1, pp. 26–33. issn: 0163-5980. doi: 10.1145/3352020.3352025.
Khatchadourian, Raffi, Yiming Tang, Mehdi Bagherzadeh, and Syed Ahmed (2019). “Safe Automated
Refactoring for Intelligent Parallelization of Java 8 Streams”. In: ICSE. ICSE ’19. IEEE Press, pp. 619–630.
doi: 10.1109/ICSE.2019.00072.
Kim, Miryung, Thomas Zimmermann, and Nachiappan Nagappan (Nov. 2012). “A Field Study of
Refactoring Challenges and Benefits”. In: FSE. ACM. doi: 10.1145/2393596.2393655.
For Further Reading III
Moldovan, Dan, James M. Decker, Fei Wang, Andrew A. Johnson, Brian K. Lee, Zachary Nado, D. Sculley,
Tiark Rompf, and Alexander B. Wiltschko (2019). AutoGraph: Imperative-style Coding with Graph-based
Performance. arXiv: 1810.08061 [cs.PL].
Negara, Stas, Nicholas Chen, Mohsen Vakilian, Ralph E. Johnson, and Danny Dig (2013). “A Comparative
Study of Manual and Automated Refactorings”. In: ECOOP. Ed. by Giuseppe Castagna. Berlin, Heidelberg:
Springer Berlin Heidelberg, pp. 552–576. isbn: 978-3-642-39038-8.
OpenAI, Inc. (Aug. 18, 2023). ChatGPT. url: https://chat.openai.com (visited on 08/18/2023).
Paszke, Adam et al. (Dec. 3, 2019). PyTorch: An Imperative Style, High-Performance Deep Learning
Library. arXiv: 1912.01703 [cs.LG].
WALA (Sept. 8, 2024). T.J. Watson Libraries for Analysis. original-date: 2012-04-05T18:57:03Z. url:
https://github.com/wala/WALA (visited on 09/10/2024).
Zadrozny, Fabio (Apr. 15, 2023). PyDev. url: https://www.pydev.org (visited on 05/31/2023).
Zhou, Weijie, Yue Zhao, Guoqiang Zhang, and Xipeng Shen (2020). “HARP: Holistic Analysis for
Refactoring Python-Based Analytics Programs”. In: ICSE. doi: 10.1145/3377811.3380434.
Appendix
Why Static Analysis?
Refactorings must operate on (at least some) static information.
Must eventually transform the source code.
May eventually integrate hybrid analyses to resolve difficult static
cases.
Why Automated Refactoring?
In general, such problems may also be handled by compilers or
runtimes; however, refactoring has several benefits:
Gives developers more control over where the optimizations take
place and makes graph execution explicit.
Can be issued multiple times, e.g., prior to major releases.
Unlike static checkers, they transform source code, a task that can
be otherwise error-prone and involve subtle nuances.
Refactorings can act like recommendation systems, which is
important for analyzing and transforming programs written in
dynamic languages where static assumptions may be easily violated!
Refactoring Developer Adoption
Developers generally underuse automated refactorings [Kim et al.,
2012, Negara et al., 2013].
Data scientists and engineers may be more open to using automated
(refactoring) tools.
Our approach will be fully automated with minimal barrier to entry.
LLMs & Big Data Refactoring
LLMs [OpenAI, Inc., 2023] can also perform refactorings.
Other Big Data-driven refactorings [Dilhara et al., 2022] are exciting
and promising.
Obtaining a (correct) dataset large enough to automatically extract
the proposed refactorings is challenging as developers struggle with
(manually) migrating DL code to graph execution [Castro Vélez
et al., 2022].
LLM inference capabilities are currently limited.
LLMs have a token limitation.
Hybridization requires interprocedural analysis.
Notebook Support
We plan to investigate notebook support in the future.
We envision the approach to be used on (larger) DL systems,
consisting of multiple files.

Hybridize Functions: A Tool for Automatically Refactoring Imperative Deep Learning Programs to Graph Execution

  • 1.
    Introduction Motivation ImplementationEvaluation Conc. Hybridize Functions: A Tool for Automatically Refactoring Imperative Deep Learning Programs to Graph Execution Raffi Khatchadourian1,2 Tatiana Castro Vélez2 Mehdi Bagherzadeh3 Nan Jia2 Anita Raja1,2 1 City University of New York (CUNY) Hunter College, USA 2 City University of New York (CUNY) Graduate Center, USA 3 Oakland University, USA International Conference on Fundamental Approaches to Software Engineering May 5, 2025, Hamilton, Canada Khatchadourian, Castro Vélez, Bagherzadeh, Jia, Raja Hybridize Functions Imperative DL Refactoring 1 / 18
  • 2.
    Introduction Motivation ImplementationEvaluation Conc. Deep Learning Systems & Run-time Performance Machine Learning (ML), including Deep Learning (DL), systems are pervasive. As datasets grow, efficiency becomes essential to support responsiveness [Zhou et al., 2020]. For efficiency, DL frameworks have traditionally embraced a deferred execution-style supporting graph-based (DNN) computation. Scalable, but development is . . . Error-prone. Cumbersome. Produces programs that are difficult to debug. Because graph computation executes statements in a non-imperative order, traditional SE tools cannot help troubleshoot bugs [Arpteg et al., 2018]. Khatchadourian, Castro Vélez, Bagherzadeh, Jia, Raja Hybridize Functions Imperative DL Refactoring 2 / 18
  • 3.
    TensorFlow Deferred Execution-styleCode 1 # Build a graph. 2 a = tf.constant(5.0) 3 b = tf.constant(6.0) 4 c = a * b 5 6 # Launch graph in a session. 7 sess = tf.Session() 8 9 # Evaluate the tensor `c`. 10 print(sess.run(c)) # prints 30.0 Lines 2–4 build a computation graph. Line 4 does not execute until the Session is run on line 10. No native support common imperative program constructs, e.g., iteration.
  • 4.
    TensorFlow Deferred Execution-styleCode 1 # Build a graph. 2 a = tf.constant(5.0) 3 b = tf.constant(6.0) 4 c = a * b 5 6 # Launch graph in a session. 7 sess = tf.Session() 8 9 # Evaluate the tensor `c`. 10 print(sess.run(c)) # prints 30.0 Lines 2–4 build a computation graph. Line 4 does not execute until the Session is run on line 10. No native support common imperative program constructs, e.g., iteration.
  • 5.
    TensorFlow Deferred Execution-styleCode 1 # Build a graph. 2 a = tf.constant(5.0) 3 b = tf.constant(6.0) 4 c = a * b 5 6 # Launch graph in a session. 7 sess = tf.Session() 8 9 # Evaluate the tensor `c`. 10 print(sess.run(c)) # prints 30.0 Lines 2–4 build a computation graph. Line 4 does not execute until the Session is run on line 10. No native support common imperative program constructs, e.g., iteration.
  • 6.
    Introduction Motivation ImplementationEvaluation Conc. Imperative DL Programming, Eager Execution, & Hybridization Imperative DL frameworks (e.g., TensorFlow Eager,Keras,PyTorch) encouraging eager execution are more natural, less error-prone, and easier to debug. Sacrifices run-time performance. Thus, hybrid approaches (e.g., Hybridize,TorchScript,AutoGraph) have surfaced that: Execute imperative DL programs as static graphs at run-time. Are integrated into mainstream DL frameworks (e.g., TensorFlow,MXNet,PyTorch). Khatchadourian, Castro Vélez, Bagherzadeh, Jia, Raja Hybridize Functions Imperative DL Refactoring 4 / 18
  • 7.
    Eager TensorFlow Imperative(OO) DL Model Code 1 class SequentialModel(tf.keras.Model): 2 def __init__(self, **kwargs): 3 super(SequentialModel, self).__init__(...) 4 self.flatten = layers.Flatten(input_shape=(28, 28)) 5 num_layers = 100 # Add many small layers. 6 self.layers = [layers.Dense(64, activation = "relu") for n in range(num_layers)] , → 7 self.dropout = tf.keras.layers.Dropout(0.2) 8 self.dense_2 = tf.keras.layers.Dense(10) 9 10 11 def __call__(self, x): 12 x = self.flatten(x) 13 for layer in self.layers: 14 x = layer(x) 15 x = self.dropout(x) 16 x = self.dense_2(x) 17 return x
  • 8.
    Hybridized TensorFlow Imperative(OO) DL Model Code 1 class SequentialModel(tf.keras.Model): 2 def __init__(self, **kwargs): 3 super(SequentialModel, self).__init__(...) 4 self.flatten = layers.Flatten(input_shape=(28, 28)) 5 num_layers = 100 # Add many small layers. 6 self.layers = [layers.Dense(64, activation = "relu") for n in range(num_layers)] , → 7 self.dropout = tf.keras.layers.Dropout(0.2) 8 self.dense_2 = tf.keras.layers.Dense(10) 9 10 @tf.function(...) # Executes model as graph (optional args). 11 def __call__(self, x): 12 x = self.flatten(x) 13 for layer in self.layers: 14 x = layer(x) 15 x = self.dropout(x) 16 x = self.dense_2(x) 17 return x On line 10, AutoGraph used to potentially enhance performance. Decorates model’s call() method with @tf.function. At run-time, call()’s execution will be “traced” (∼9.22 speedup).
Hybridization Drawbacks

Needs non-trivial, specialized metadata [Jeong et al., 2019].
Exhibits limitations and known issues with native program constructs.
Subtle considerations are required to:
  Specify (decorate) the functions to be migrated.
  Make code amenable to safe, accurate, and efficient graph execution.
  Avoid performance bottlenecks and semantically inequivalent results [Cao et al., 2022, Castro Vélez et al., 2022].
Manual analysis and refactoring (semantics-preserving, source-to-source transformation) for optimal results can be error- and omission-prone [Dig et al., 2009].
Further complicated by:
  Increasing Object-Orientation (OO) in DL model code (e.g., Keras).
  Dynamically-typed languages (e.g., Python).
Imperative DL Code With Python Side-effects

1 @tf.function
2 def f(x):
3     print("Input: ", x)
4 f(1)
5 f(1)
6 f(2)

Output (expecting 1, 1, 2):
Input: 1
Input: 2

Side-effect-producing, native Python statements, e.g., printing, list appending, and global variable mutation, are problematic for tf.function-decorated functions (i.e., "tf.functions").
Because they are traced, a function's behavior is "etched" into its corresponding graph.
Can have unexpected results, executing side-effects multiple times or not at all.
Side-effects occur when tf.functions are called the first time. Subsequent calls with similar arguments execute the graph instead.
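The "etching" above can be mimicked without TensorFlow. The following toy sketch (an illustration only, not the real tf.function machinery) caches a result per argument value: the function body, and thus its Python side-effects, runs only when a value is first traced, while later calls replay the cached result, just as the graph replay above skips the print.

```python
# Toy sketch (NOT real TensorFlow): mimic tf.function's trace-then-replay
# behavior to show why Python side-effects run only at trace time.
def trace_once(fn):
    cache = {}  # argument values -> cached ("graph") result
    def wrapper(*args):
        if args not in cache:        # unseen arguments: trace the function,
            cache[args] = fn(*args)  # running its Python side-effects once
        return cache[args]           # seen arguments: replay, no side-effects
    return wrapper

calls = []  # record of the side-effect (stand-in for print)

@trace_once
def f(x):
    calls.append(x)  # Python side-effect, like print("Input: ", x)
    return x

f(1); f(1); f(2)
print(calls)  # side-effect ran while tracing 1 and 2, skipped the second f(1)
```

The value-keyed cache mirrors how tf.function treats plain Python scalar arguments; real tracing is keyed on richer signatures (shapes and dtypes for tensors).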
Imperative (OO) DL Code With Python Side-effects

 1 class Model(tf.Module):
 2     def __init__(self):
 3         self.v = tf.Variable(0)
 4         self.counter = 0
 5
 6     @tf.function
 7     def __call__(self):
 8         if self.counter == 0:
 9             self.counter += 1
10             self.v.assign_add(1)
11         return self.v
12 m = Model()
13 for n in range(3):
14     print(m().numpy())

Output (expecting 1, 1, 1):
1
2
3

A model uses a counter to safeguard a variable incrementation.
The initial value of counter (line 4), however, is captured during tracing upon the first model invocation (line 14).
Variable v is incremented unconditionally (line 10) each time the model is invoked.
Such problems are common when migrating to graph execution.
Can result in suspicious numerical results or lower performance.
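The capture can again be simulated without TensorFlow. The toy sketch below (an illustration, not TF's actual mechanism) freezes the Python guard at trace time and reproduces the 1, 2, 3 output; in real TensorFlow, the usual remedy is to keep such state in tf.Variables and branch on tensor values so the graph re-reads the state on every call.

```python
# Toy sketch (NOT real TensorFlow): Python state read during tracing is
# baked into the recorded "graph" as a constant, so the guard never re-runs.
class TracedModel:
    def __init__(self):
        self.v = 0          # stand-in for tf.Variable(0)
        self.counter = 0
        self._graph = None
    def __call__(self):
        if self._graph is None:                # first call: trace
            baked_guard = (self.counter == 0)  # Python state frozen here!
            self.counter += 1
            def graph():
                if baked_guard:  # always the trace-time value (True)
                    self.v += 1  # so v increments on every invocation
                return self.v
            self._graph = graph
        return self._graph()  # later calls: replay the recorded graph

m = TracedModel()
print([m() for _ in range(3)])  # [1, 2, 3], not the expected [1, 1, 1]
```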
Insight

Although imperative DL code is sequentially executed, hybridizing code resembles parallelizing sequential code.

Example: To avoid unexpected behavior, hybrid functions, like concurrent programs, should avoid side-effects.

Idea: Adapt concepts from automated refactorings that parallelize sequential code, e.g., Streaming APIs [Khatchadourian et al., 2019].
Refactorings

Two new, fully automated refactorings:

Convert Eager Function to Hybrid
  Transforms otherwise eagerly executed imperative (Python) DL code for enhanced run-time performance.
  Automatically specifies (decorates) whether and how code could be reliably and efficiently executed as graphs at run time.
  Avoids hybridizing code under certain conditions (e.g., side-effecting code) to preserve semantics.

Optimize Hybrid Function
  Transforms code already running as graphs for optimal run-time performance.
  Possibly dehybridizes code when eager execution could be faster (e.g., graph "retracing").
  Issues refactoring "warnings" when hybrid code may have unexpected results but refactoring is not possible due to semantics preservation.
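The "retracing" pitfall that motivates dehybridization can be illustrated with a toy tracing cache (a sketch under the assumption, true for plain Python scalars passed to tf.function, that tracing is keyed on argument values): each new value forces a fresh, expensive trace, so eager execution, or constraining the signature via tf.function's input_signature, can end up faster.

```python
# Toy sketch (NOT real TensorFlow): a tf.function-like cache re-traces
# whenever it sees a new Python-value argument, defeating graph caching.
trace_count = [0]

def traced(fn):
    cache = {}
    def wrapper(*args):
        if args not in cache:        # new signature -> re-trace (slow)
            trace_count[0] += 1
            cache[args] = fn(*args)
        return cache[args]           # cache hit -> cheap graph replay
    return wrapper

@traced
def step(lr):
    return lr * 2

for lr in (0.1, 0.2, 0.3):  # a new Python scalar on every call
    step(lr)
print(trace_count[0])  # 3 traces: the cache never got a hit
```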
Approach Highlights

Novel tensor analysis for imperative DL code.
  Current analyzers only work on procedural (TF 1) code.
Modernization of WALA Ariadne [Dolby et al., 2018] for imperative (TF 2) code.
Implemented as a PyDev Eclipse IDE plug-in [Zadrozny, 2023].
  Integrates Ariadne for tensor type inference analysis.
Leverages complementary speculative analysis [Zhou et al., 2020] using contextual DL keywords for difficult static cases.
Architecture & Dependencies

Eclipse leveraged for its refactoring framework and test engine [Bäumer et al., 2001].
PyDev used for its efficient indexing and refactoring support, and because it is open-source for all Python development.
WALA used for static analyses (ModRef), on which our side-effect analysis is built.
WALA Ariadne used for Python analysis, tensor type inference, and (TensorFlow) library modeling.
Figure: Screenshot of the Hybridize Functions refactoring preview wizard.
Challenges Addressed

Reworked much of the existing Java (JDT) refactoring tooling to work with Python.
Integrated Ariadne with PyDev due to its excellent and long-lived refactoring support for Python, including the refactoring preview pane, element GUI selection, and refactoring undo history.
Augmented Ariadne to analyze imperative Deep Learning (Python) code by expanding XML summaries to support TensorFlow 2 APIs.
Added support for Python constructs commonly used in modern imperative DL programs.
Correlated varying intermediate representations (IRs) with the original Python source code for transformation.
Modernizing Ariadne: New Enhancements

Python module packages.
Wildcard imports.
Intra-package references (relative imports; from .. import X).
Package initialization scripts.
Automatic unit test entry point discovery.
Non-scalar tensor dataset [Google LLC, 2023] iteration.
Modeling of additional libraries.
Static and class method analysis.
Analysis of custom decorators.
Callable object (functor) analysis (used in Keras).
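Of the constructs above, callable objects are central to Keras-style models: layers are plain objects invoked via __call__, so the analysis must route call graphs through object invocations. A minimal sketch of the pattern (the Dense class here is a hypothetical stand-in, not the Keras API):

```python
# Sketch of the "callable object (functor)" pattern the analysis must model:
# Keras layers are ordinary objects invoked like functions via __call__.
class Dense:  # hypothetical stand-in, not keras.layers.Dense
    def __init__(self, units):
        self.units = units
    def __call__(self, x):
        return [v * self.units for v in x]  # stand-in for the real transform

layer = Dense(3)
print(layer([1, 2]))  # the call site looks like a plain function call
```

Statically, `layer([1, 2])` gives no syntactic hint that `Dense.__call__` runs, which is why functor analysis is needed.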
Evaluation Summary

Analyzed 19 open-source Python imperative DL systems.
  Of varying size and domain, ranging from 0.12 to 36.72 KSLOC.
Refactored 42.56% of 766 functions despite conservatism.

Run-time Performance Evaluation Summary

Measured an average relative model training speedup of 2.16×.
Memory consumption measurement is pending.
Differences in model accuracy and loss before and after refactoring were negligible.
Conclusion

Imperative DL code is easier to debug, write, and maintain.
  Comes at the expense of (run-time) performance.
Hybridization bridges the gap between eager and graph execution.
  Optimal performance and semantics preservation is non-trivial.

Our Work
  Open-source, automated refactoring PyDev Eclipse plug-in that assists developers with writing optimal imperative DL Python code.
  Integrates an Eclipse refactoring with WALA Ariadne static analyses.

Future Work
  More advanced container-based analyses.
  Automatically split functions.
  First-class hybrid functions.
For Further Reading I

Abadi, Martín et al. (2016). "TensorFlow: A System for Large-Scale Machine Learning". In: Symposium on Operating Systems Design and Implementation.
Agrawal, Akshay et al. (2019). TensorFlow Eager: A Multi-Stage, Python-Embedded DSL for Machine Learning. arXiv: 1903.01855 [cs.PL].
Apache (Apr. 8, 2021). Hybridize. Apache MXNet documentation. url: https://mxnet.apache.org/versions/1.8.0/api/python/docs/tutorials/packages/gluon/blocks/hybridize.html (visited on 04/08/2021).
Arpteg, A., B. Brinne, L. Crnkovic-Friis, and J. Bosch (2018). "Software Engineering Challenges of Deep Learning". In: Euromicro Conference on Software Engineering and Advanced Applications. IEEE, pp. 50–59. doi: 10.1109/SEAA.2018.00018.
Bäumer, Dirk, Erich Gamma, and Adam Kiezun (Oct. 2001). "Integrating refactoring support into a Java development tool". url: http://people.csail.mit.edu/akiezun/companion.pdf (visited on 09/10/2024).
Cao, Junming, Bihuan Chen, Chao Sun, Longjie Hu, Shuaihong Wu, and Xin Peng (2022). "Understanding Performance Problems in Deep Learning Systems". In: FSE '22. ACM, pp. 357–369. doi: 10.1145/3540250.3549123.
Castro Vélez, Tatiana, Raffi Khatchadourian, Mehdi Bagherzadeh, and Anita Raja (May 2022). "Challenges in Migrating Imperative Deep Learning Programs to Graph Execution: An Empirical Study". In: MSR '22. ACM/IEEE. doi: 10.1145/3524842.3528455.
Chen, Tianqi, Mu Li, Yutian Li, Min Lin, Naiyan Wang, Minjie Wang, Tianjun Xiao, Bing Xu, Chiyuan Zhang, and Zheng Zhang (2015). "MXNet: A Flexible and Efficient Machine Learning Library for Heterogeneous Distributed Systems". In: Workshop on Machine Learning Systems at NIPS. arXiv: 1512.01274 [cs.DC].
Chollet, François (2020). Deep Learning with Python. 2nd ed. Manning.
For Further Reading II

Dig, Danny, John Marrero, and Michael D. Ernst (2009). "Refactoring sequential Java code for concurrency via concurrent libraries". In: ICSE, pp. 397–407. doi: 10.1109/ICSE.2009.5070539.
Dilhara, Malinda, Ameya Ketkar, Nikhith Sannidhi, and Danny Dig (2022). "Discovering Repetitive Code Changes in Python ML Systems". In: ICSE '22.
Dolby, Julian, Avraham Shinnar, Allison Allain, and Jenna Reinen (2018). "Ariadne: Analysis for Machine Learning Programs". In: MAPL. ACM SIGPLAN. ACM, pp. 1–10. doi: 10.1145/3211346.3211349.
Eclipse Foundation (June 2024). Eclipse IDE. url: https://eclipseide.org/ (visited on 09/10/2024).
Facebook Inc. (2019). PyTorch. TorchScript. url: https://pytorch.org/docs/stable/jit.html (visited on 02/19/2021).
Google LLC (Mar. 17, 2023). tf.data.Dataset. TensorFlow. Version 2.9.3. url: https://www.tensorflow.org/versions/r2.9/api_docs/python/tf/data/Dataset (visited on 12/15/2023).
Jeong, Eunji, Sungwoo Cho, Gyeong-In Yu, Joo Seong Jeong, Dong-Jin Shin, Taebum Kim, and Byung-Gon Chun (July 2019). "Speculative Symbolic Graph Execution of Imperative Deep Learning Programs". In: SIGOPS Oper. Syst. Rev. 53.1, pp. 26–33. issn: 0163-5980. doi: 10.1145/3352020.3352025.
Khatchadourian, Raffi, Yiming Tang, Mehdi Bagherzadeh, and Syed Ahmed (2019). "Safe Automated Refactoring for Intelligent Parallelization of Java 8 Streams". In: ICSE '19. IEEE Press, pp. 619–630. doi: 10.1109/ICSE.2019.00072.
Kim, Miryung, Thomas Zimmermann, and Nachiappan Nagappan (Nov. 2012). "A Field Study of Refactoring Challenges and Benefits". In: FSE. ACM. doi: 10.1145/2393596.2393655.
For Further Reading III

Moldovan, Dan, James M. Decker, Fei Wang, Andrew A. Johnson, Brian K. Lee, Zachary Nado, D. Sculley, Tiark Rompf, and Alexander B. Wiltschko (2019). AutoGraph: Imperative-style Coding with Graph-based Performance. arXiv: 1810.08061 [cs.PL].
Negara, Stas, Nicholas Chen, Mohsen Vakilian, Ralph E. Johnson, and Danny Dig (2013). "A Comparative Study of Manual and Automated Refactorings". In: ECOOP. Ed. by Giuseppe Castagna. Berlin, Heidelberg: Springer Berlin Heidelberg, pp. 552–576. isbn: 978-3-642-39038-8.
OpenAI, Inc. (Aug. 18, 2023). ChatGPT. url: https://chat.openai.com (visited on 08/18/2023).
Paszke, Adam et al. (Dec. 3, 2019). PyTorch: An Imperative Style, High-Performance Deep Learning Library. arXiv: 1912.01703 [cs.LG].
WALA (Sept. 8, 2024). T.J. Watson Libraries for Analysis. url: https://github.com/wala/WALA (visited on 09/10/2024).
Zadrozny, Fabio (Apr. 15, 2023). PyDev. url: https://www.pydev.org (visited on 05/31/2023).
Zhou, Weijie, Yue Zhao, Guoqiang Zhang, and Xipeng Shen (2020). "HARP: Holistic Analysis for Refactoring Python-Based Analytics Programs". In: ICSE. doi: 10.1145/3377811.3380434.
Appendix
Why Static Analysis?

Refactorings must operate on (at least some) static information.
Must eventually transform the source code.
May eventually integrate hybrid analyses to resolve difficult static cases.
Why Automated Refactoring?

In general, such problems may also be handled by compilers or runtimes; however, refactoring has several benefits:
Gives developers more control over where the optimizations take place, making graph execution explicit.
Can be issued multiple times, e.g., prior to major releases.
Unlike static checkers, refactorings transform source code, a task that can otherwise be error-prone and involve subtle nuances.
Refactorings can act like recommendation systems, which is important for analyzing and transforming programs written in dynamic languages, where static assumptions may be easily violated!
Refactoring Developer Adoption

Developers generally underuse automated refactorings [Kim et al., 2012, Negara et al., 2013].
Data scientists and engineers may be more open to using automated (refactoring) tools.
Our approach will be fully automated with a minimal barrier to entry.
LLMs & Big Data Refactoring

LLMs [OpenAI, Inc., 2023] can also perform refactorings.
Other Big Data-driven refactorings [Dilhara et al., 2022] are exciting and promising.
Obtaining a (correct) dataset large enough to automatically extract the proposed refactorings is challenging, as developers struggle with (manually) migrating DL code to graph execution [Castro Vélez et al., 2022].
LLM inference capabilities are currently limited:
  LLMs have a token limitation.
  Hybridization requires interprocedural analysis.
Notebook Support

We plan to investigate notebook support in the future.
We envision the approach being used on (larger) DL systems consisting of multiple files.