Resources for Prospective Students
(Last updated on Apr 8, 2024)
I am broadly interested in topics related to programming languages, especially formal verification, program analysis, and probabilistic programming. I am currently working on resource-safe system programming, programmable Bayesian inference, quantitative program analysis, and proof-oriented programming languages.
- If you want to learn about possible projects and research directions that I would like to advise or collaborate on at the moment, check out the list below.
- If you want to know more about my long-term perspective on incorporating randomness in software, check out the research statement that I wrote for my faculty applications.
- If you care about teaching and mentoring, check out my teaching statement.
- If you have concrete questions about me or Peking University, check out the Q&A below, and send me an email if your question is not covered there.
Q&A
Are you recruiting students?
Do I need to have a specific research background? Do I need to know a specific problem to work on?
Can I do machine learning / operating systems / databases / ...?
Why should I consider Peking University? Who else could I work with at Peking University?
You did not reply to my email!
Current Research Directions
I always welcome new collaborators on the following (non-exhaustive) list of research topics. Please contact me if you are interested in working on any of these directions.
Resource-Safe System Programming
Resource usage (e.g., time, memory, and energy) of a program is one of the central subjects of computer science. However, resource usage usually does not play a central role in programming-language theory, such as formal semantics, static analysis, type systems, and program logics. I am generally interested in most resource-analysis projects, including both verification (e.g., of software implemented in an existing programming language) and language design (e.g., for easier and more precise resource analysis).
Analysis of resource usage is becoming increasingly critical for system programming. The recent success of Rust indicates that modern system programming languages should focus on static safety guarantees, such as memory safety and thread safety. Many programming languages do provide such guarantees, but mostly via runtime mechanisms (such as garbage collection), and those mechanisms result in unsatisfactory performance, especially in system software. Rust, by proposing a memory-safety type system, reduces runtime resource consumption and thus achieves high performance. The performance of system software involves more than memory management, though. I am interested in proposing a paradigm of resource-safe programming, where the resource consumption of programs is statically verified to meet developers’ expectations, and the programs are free from multiple kinds of resource-related vulnerabilities, such as algorithmic complexity attacks and side-channel attacks. Below is a sample of medium- to long-term projects in resource-safe system programming:
- Resource analysis of Rust: Can the fine-grained memory model improve the precision and efficiency of resource analysis?
- Resource analysis of concurrent programs: Rust has limited support for static thread-safety guarantees, e.g., Rust does not statically rule out deadlocks. Can we incorporate a fine-grained concurrency model in system programming and extend resource analysis to support it?
- Target-level resource analysis: How can we analyze compiled programs with respect to low-level resource metrics, e.g., the number of clock cycles? The analysis needs to be hardware-specific because we need to take cache, pipeline, branch prediction, etc. into consideration.
- Discovery of resource-related security vulnerabilities: Can we detect algorithmic-complexity-attack bugs and/or side-channel-attack bugs? In addition, if such bugs exist, can we automatically synthesize an attacker to trigger the bugs?
At the moment, my favored approach is Automatic Amortized Resource Analysis (AARA)^{1}, but I am open to other approaches, such as recurrence solving, sized types, ranking functions, and symbolic resource analysis.
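To make the algorithmic-complexity-attack bullet concrete, here is a minimal sketch (the function and inputs are mine, for illustration only): naive substring search does quadratic work on an adversarial pattern, and the gap is visible just by counting character comparisons, without any timing measurements.

```python
def naive_find(text: str, pat: str) -> tuple[int, int]:
    """Return (index of first match or -1, number of character comparisons)."""
    comparisons = 0
    for i in range(len(text) - len(pat) + 1):
        j = 0
        while j < len(pat):
            comparisons += 1
            if text[i + j] != pat[j]:
                break
            j += 1
        else:
            return i, comparisons  # full match at alignment i
    return -1, comparisons

# Benign pattern: every alignment fails on the very first comparison.
_, benign = naive_find("a" * 200, "b" + "a" * 99)
# Adversarial pattern: every alignment matches 99 characters before failing,
# so the total work is roughly |text| * |pattern|.
_, adversarial = naive_find("a" * 200, "a" * 99 + "b")
```

A resource-safe language would ideally let the developer state (and the compiler check) that such a search routine stays linear on all inputs, rather than discovering the blowup from an attacker-supplied pattern.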
Programmable Bayesian Inference
In contrast to frequentist methods like deep learning, Bayesian learning accounts for the probability distribution of hypotheses that produce the observed data and thus naturally quantifies the uncertainty that is present in the learned models. In addition, Bayesian learning enables the straightforward incorporation of domain knowledge and generalizes well to unseen data. These advantages of Bayesian learning are realized by probabilistic programming languages (PPLs) like Stan and Pyro, which provide an interface that separates model development from various built-in Bayesian inference algorithms.
The main downside of Bayesian learning is that probabilistic inference is computationally hard, and not even the most advanced inference algorithms work well for all models. To make probabilistic inference for Bayesian learning feasible for more models and larger data sets, some PPLs have therefore forgone the strict separation of modeling and inference by enabling users to customize specific Bayesian inference algorithms through so-called programmable inference^{2}. However, it is all too easy for users to incorrectly program inference in a way that breaks convergence and leads to unsound learned models. Such a mistake can even go unnoticed.
I am interested in defusing the tension between soundness and flexibility of probabilistic inference by applying programming-language techniques such as type systems and static analysis. The vision is that new programming abstractions and type systems ensure the soundness of programmable inference, while static analysis and program synthesis assist users by automating the customization of inference with the goal of improving efficiency. Our PLDI paper is a concrete demonstration of this research direction.
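As a toy illustration of how easily hand-customized inference goes wrong, the sketch below (a hypothetical three-state example with made-up numbers) builds a Metropolis-Hastings chain for a user-supplied asymmetric proposal. Dropping the Hastings correction term still "runs" without errors, but the chain converges to the wrong distribution, which is exactly the kind of silent unsoundness a type system for programmable inference should rule out.

```python
target = [0.2, 0.3, 0.5]            # desired posterior over 3 states
q = [[0.0, 0.7, 0.3],               # q[i][j]: user's asymmetric proposal kernel
     [0.2, 0.0, 0.8],
     [0.6, 0.4, 0.0]]

def mh_kernel(correct: bool):
    """Metropolis-Hastings transition matrix; `correct` keeps the
    q[j][i]/q[i][j] Hastings correction in the acceptance ratio."""
    n = len(target)
    P = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            ratio = target[j] / target[i]
            if correct:
                ratio *= q[j][i] / q[i][j]   # Hastings correction
            P[i][j] = q[i][j] * min(1.0, ratio)
        P[i][i] = 1.0 - sum(P[i])            # rejection mass stays put
    return P

def stationary(P, iters=2000):
    """Approximate the stationary distribution by power iteration."""
    d = [1.0 / len(P)] * len(P)
    for _ in range(iters):
        d = [sum(d[i] * P[i][j] for i in range(len(P))) for j in range(len(P))]
    return d

good = stationary(mh_kernel(correct=True))    # matches `target`
bad = stationary(mh_kernel(correct=False))    # silently biased
```

The deterministic power iteration (instead of sampling) makes the bias reproducible: the buggy kernel satisfies no detailed-balance equation with respect to `target`, so its stationary distribution visibly deviates from it.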
Quantitative Program Analysis
Randomness in programs can show up in two ways: external randomness (e.g., uncertainty from the environment) and internal randomness (e.g., randomized algorithms). Just as Kleene algebras have become an algebraic foundation for the analysis of non-probabilistic programs^{3}, I am interested in developing an algebraic framework for the analysis of probabilistic programs. To this end, we need to formulate an algebraic semantic framework for probabilistic programs and develop a generic algorithm to solve program-analysis instances for probabilistic programs.
One fundamental difference between non-probabilistic and probabilistic programs is that an execution of a non-probabilistic program is a chain, whereas an execution of a probabilistic program is a tree. To see why, consider a program model that allows nondeterminism. A non-probabilistic program then corresponds to a collection of possible execution chains, but a probabilistic program should be interpreted as a collection of distributions, each of which can be encoded as an execution tree, where each rooted path (annotated with a probability) is a concrete run of the program. See our MFPS paper for a more formal presentation based on hypergraphs.
Abstract interpretation is a powerful framework for describing and solving program-analysis instances. Researchers have proposed generic solving strategies, such as chaotic iteration, widening, narrowing, etc. However, these iterative solving algorithms do not fit the quantitative nature of probabilistic programs. Suppose we want to analyze a probabilistic loop “$\mathbf{while}~(\mathbf{prob}(3/4))~\{ x = x + 1; \}$.” The expected delta of the variable $x$ is the least solution to the equation $r = f(r) \equiv (1/4) \cdot 0+(3/4) \cdot (r+1)$, and we can solve it analytically: $r = 3$. However, an iterative solving strategy would approach $r$ by a sequence of approximations, $\{ f^i(0) \}_{i \in \mathbb{N}}$, which does not converge in any finite number of iterations. I am interested in adapting Newtonian Program Analysis^{4} to the probabilistic setting, with the observation that Newton’s method is more suitable for such a quantitative setting.
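The contrast can be seen directly on this example. Kleene iteration under-approximates the least solution $r = 3$ forever, while a single Newton step on $g(r) = f(r) - r$ (which is affine here) lands exactly on it. A minimal numeric sketch:

```python
# Least fixed point of r = f(r) = (1/4)*0 + (3/4)*(r + 1), the expected
# delta of x in "while (prob(3/4)) { x = x + 1; }". Exact answer: r = 3.

def f(r):
    return 0.25 * 0.0 + 0.75 * (r + 1.0)

# Kleene/chaotic iteration r_{i+1} = f(r_i), starting from 0:
# r_i = 3 * (1 - 0.75**i), strictly below 3 for every finite i.
r = 0.0
for _ in range(50):
    r = f(r)

# Newton's method on g(r) = f(r) - r. Because f is affine in r,
# g'(r) = 3/4 - 1 = -1/4 is constant and one step solves exactly.
def newton_step(r):
    g = f(r) - r
    dg = 0.75 - 1.0
    return r - g / dg

exact = newton_step(0.0)   # reaches 3.0 in a single step
```

For general (non-affine) equation systems Newton's method needs several steps, but each step solves a linearized system exactly, which is the property that makes it attractive for quantitative domains.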
Moreover, the proposed algebraic framework for reasoning about probabilistic programs should, in principle, also be suitable for reasoning about other quantitative properties, such as resource consumption. It would be interesting to see if this observation leads to a more general quantitative-program-analysis framework. A recent paper by Batz et al. proposed a notion of weighted programming^{5}, which further generalizes the quantitative properties. Applications of the proposed quantitative-program-analysis framework include technology transfer between probabilistic analysis and resource analysis; for example, Maximum-a-Posteriori (MAP) estimation has a very similar problem statement to Worst-Case Analysis (WCA), so it would be interesting to use this observation to build new statistical resource-analysis techniques.
Other Projects
I am always open to new directions, including interdisciplinary ones. Below is a sample of initial ideas that I am interested in exploring in the short (or medium) term. The list is not complete, and be aware that most of these thoughts are not mature at all; in fact, I expect that some of them will quickly lead to negative results.
Static Analysis
Library Summarization via Tensor Products
Problem: Library summarization is an effective way to accelerate the analysis of client code. However, information about the client is unknown at library-summarization time, preventing complete summarization of the library. A state-of-the-art approach is Conditional Dyck-CFL Reachability Analysis (ConCRA), which targets graph-reachability-style static analyses and uses hypothetical summaries.
Proposal: Lal et al.^{6} proposed a tensor-product principle: tensor products with an appropriate detensor operation allow computations to be rearranged in certain ways; for example, they can be used to delay a multiplication in a chain of multiplications. This principle may allow us to generalize, in the summarization framework, Dyck-CFL-reachability-based analyses to algebraic, semiring-based analyses.
Followup: The library summarization problem can be generalized to partially solving an equation system for a static analysis. Any thoughts on partial evaluation?
NPA-TP for Non-idempotent Semirings
Problem: Esparza et al.^{4} generalized Newton’s method to a method for finding fixed points of systems of equations over semirings, leading to a new algorithm for solving interprocedural dataflow analysis. Reps et al.^{7} developed NPA-TP, which extends Esparza’s framework with an improved algorithm for solving linear-context-free-language (LCFL) subproblems. However, NPA-TP assumes that the underlying abstract domain forms an idempotent semiring; this assumption rules out interesting analysis domains, especially for numerical properties, e.g., the reaching probability.
Proposal: Esparza’s original framework does support non-idempotent semirings, so it would be a good starting point. The NPA-TP framework features tensor products to solve LCFL subproblems, and from the perspective of algebra, the principle of tensor products does not require idempotence.
Followup: Some recent work shows that quantum computation also needs non-idempotent semirings. Can the idea of NPA-TP be generalized to analyze quantum programs?
Complicated Probability Manipulation
Problem: In many programs, the manipulation of probability is very complicated and even manual analysis of their correctness is difficult. Consider the problem below:
x = p;
while (prob(0.5)) {
    x = 2 * x;
    if (x >= 1) {
        x = x - 1;
    }
}
We want to verify that at the end of the loop, the probability of the event $x \ge \frac{1}{2}$ is exactly $p$.
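For a concrete $p$, the claim can be sanity-checked mechanically with exact rational arithmetic: after $k$ iterations, $x$ is the fractional part of $2^k p$ (the doubling map), and the number of iterations is geometric with $\Pr[N = k] = (1/2)^{k+1}$. A quick check for $p = 3/10$ (a sanity check, not a proof):

```python
from fractions import Fraction

def prob_x_ge_half(p: Fraction, depth: int = 60) -> Fraction:
    """P(x >= 1/2 at loop exit), computed exactly by case analysis on the
    number of iterations N: P(N = k) = (1/2)^(k+1), and after k iterations
    x equals the fractional part of 2^k * p."""
    total = Fraction(0)
    x = p
    half = Fraction(1, 2)
    for k in range(depth):
        if x >= half:
            total += Fraction(1, 2) ** (k + 1)
        x = (2 * x) % 1     # one loop iteration: double, drop the integer part
    return total

approx = prob_x_ge_half(Fraction(3, 10))
# Truncating the sum at depth 60 loses at most 2^-60 of probability mass,
# so `approx` is within 2^-60 of the exact answer, which should be p.
```

The check also explains why the claim holds: $\mathrm{frac}(2^k p) \ge 1/2$ exactly when the $(k+1)$-th binary digit of $p$ is 1, so the sum reconstructs the binary expansion of $p$.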
Proposal: Probability-generating functions provide a succinct way to describe probability distributions of discrete random variables. We might try to derive the probability-generating function of a probabilistic program to reason about the resulting distribution of the program. A recent paper^{8} is in this direction.
Followup: There is a connection between the derivatives of the probability-generating function of a random variable $X$ and the moments of $X$. Can we use this fact to perform moment analysis?
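For finitely supported distributions this connection is easy to exercise: the PGF is a polynomial, with $G'(1) = \mathbb{E}[X]$ and $G''(1) = \mathbb{E}[X(X-1)]$, so moments reduce to coefficient arithmetic. A small sketch for the sum of two fair dice:

```python
def convolve(a, b):
    """PGF of a sum of independent variables = product of the PGFs."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def derivative_at_one(coeffs, order=1):
    """Evaluate the `order`-th derivative of sum_k coeffs[k] * s^k at s = 1,
    i.e., the factorial moment E[X (X-1) ... (X-order+1)]."""
    total = 0.0
    for k, c in enumerate(coeffs):
        factor = 1.0
        for t in range(order):
            factor *= (k - t)
        total += c * factor
    return total

die = [0.0] + [1.0 / 6.0] * 6       # PGF coefficients of one fair die
two_dice = convolve(die, die)       # PGF of the sum of two dice

mean = derivative_at_one(two_dice)                  # E[X] = 7
second_factorial = derivative_at_one(two_dice, 2)   # E[X(X-1)]
variance = second_factorial + mean - mean**2        # Var[X] = 35/6
```

A moment analysis in this style would derive the coefficient list (or a closed form of $G$) from the program rather than writing it down by hand.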
Fine-grained Heap Abstraction via Regular Patterns
Problem: Static analysis of heap-manipulating programs often involves heap sensitivity. In the simplest case, all memory allocations at the same program location are treated as a single allocation (thus in a context-insensitive way). Advanced techniques include three-valued logic for shape analysis, $k$-CFA, $m$-CFA^{9}, etc. However, they seem insufficient when we want to analyze recursive programs. (Imagine that in functional programming, iterating over a list is usually implemented as a recursive function.)
Proposal: We can treat the set of all call strings as a language; then different strategies for handling sensitivity are essentially different ways of defining equivalence classes on that language. A promising proposal is to use regular patterns to define the equivalence classes. For example, $(ff)^*$ indicates calling a function $f$ an even number of times, and $f(ff)^*$ an odd number of times.
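A toy version of this proposal (all names hypothetical): key each abstract heap object by its allocation site together with the regular-pattern class of the current call string, so that allocations whose call strings match the same pattern are merged.

```python
import re

# Hypothetical pattern classes over call strings built from call symbols.
PATTERNS = {
    "even_f": re.compile(r"(ff)*"),     # f called an even number of times
    "odd_f": re.compile(r"f(ff)*"),     # f called an odd number of times
}

def call_string_class(call_string: str) -> str:
    for name, pat in PATTERNS.items():
        if pat.fullmatch(call_string):
            return name
    return "other"

def abstract_object(alloc_site: str, call_string: str) -> tuple[str, str]:
    """Two allocations are merged iff they share a site and a pattern class."""
    return (alloc_site, call_string_class(call_string))
```

Unlike a bounded $k$-suffix, the pattern class distinguishes even from odd recursion depth at any depth, which is the kind of property that matters when iterating over a list recursively.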
Followup: In fact, this approach should generalize to all sensitivities that use call strings. Is it meaningful to develop a general framework with regular-pattern sensitivity?
Formal Semantics
Combination of Continuous Distributions and Nondeterminism-first
Problem: Nondeterminism-first means that we resolve nondeterminism prior to program inputs when defining denotational semantics. For example, suppose that a deterministic function has the signature $A \to B$; then a standard resolution of nondeterminism gives us $A \to \wp(B)$, but nondeterminism-first suggests $\wp(A \to B)$. Nondeterminism-first can be useful for, e.g., compile-time nondeterminism. A recent study proposed a denotational semantics for probabilistic programs that combines discrete distributions (on a countable state space) and nondeterminism-first. However, it remains open whether the semantics can be extended to support continuous distributions.
Proposal: In fact, I do not have any ideas yet.
Followup: What about a computational theory for such a combination?
Combination of Probability and Message-passing Concurrency
Problem: The common treatment of a probabilistic flip in the semantics is to “split” the current program configuration into two, one for the “world” where the flip shows heads and the other for tails. When we need to deal with multiple concurrent processes, such a treatment means that a local flip in one process results in duplicating all other processes. Such a non-local (thus non-compositional) behavior has been shown to be problematic for proving the soundness of formal methods (e.g., type systems, which are usually compositional).
Proposal: The result of a probabilistic flip should remain as local as possible. We have recently developed an operational semantics with the desirable locality, but I feel that a more algebraic representation is better. For example, we might want to rearrange the operational semantics to reduction rules and congruence rules. Furthermore, we should prove an equivalence result to the ordinary probabilistic semantics (e.g., Markov chains).
Followup: I can always imagine that adding nondeterminism causes a lot of troubles.
Building a Framework for Nondeterminism-as-refinement
Problem: Nondeterminism is often referred to as the basis of abstraction and refinement in the setting of stepwise development^{10}. However, such a pattern has not been popular in modern programming languages. What is the main obstacle to developing such a framework? Also, people have developed many kinds of semantics for nondeterminism. Which semantics is the most suitable one for refinement-based programming and reasoning?
Proposal: I guess this has something to do with relational Hoare Logics^{11}. At least they provide a way to use nondeterminism in a formal framework of proving program properties.
Followup: How to devise a language design for it?
Type Systems
Safe Interaction between Typed Domain-Specific Languages
Problem: Suppose we have a typed programming language that is suitable for developing embedded domain-specific languages (eDSLs). Every eDSL can directly use the base type system of the host language, but it can also have its own advanced domain-specific type system. Languages such as MetaOCaml already support such a mechanism, but the interaction between the host language and the eDSL, or even between different eDSLs, is not very convenient yet. It would be nice to have a simple and safe interaction scheme among eDSLs on the same host language.
Proposal: In fact, I do not have any ideas yet.
Followup: What else should we have for such a flexible eDSL framework? For example, we might want to consider multi-stage compilation, debuggers, mixed paradigms, etc.
Lightweight Shape Check for Array Programming
Problem: Array programming (or tensor programming) is pervasive nowadays because of machine learning. Many machine-learning frameworks lack the ability to statically check that tensor shapes are always consistent during program execution. Approaches based on dependent/refinement/indexed types all seem too heavy to become practical.
Proposal: In many (really?) cases, when the input of an array program is fixed, the shapes of all intermediate arrays are also fixed. So we can perform a static shape check after the input is given, but before we execute the program, by propagating the concrete shape information from the given input.
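A minimal sketch of this input-triggered shape check, with a hypothetical two-layer network hard-coded for illustration: given a concrete input shape, propagate shapes through the operations and fail before any real tensor arithmetic runs.

```python
class ShapeError(Exception):
    pass

def matmul_shape(a, b):
    """Shape rule for 2-D matrix multiplication: (m, k) @ (k, n) -> (m, n)."""
    if len(a) != 2 or len(b) != 2 or a[1] != b[0]:
        raise ShapeError(f"cannot matmul {a} with {b}")
    return (a[0], b[1])

def check_mlp(input_shape):
    """Propagate shapes through a fixed two-layer network (x @ W1) @ W2,
    with hypothetical weight shapes W1: (10, 64), W2: (64, 1)."""
    w1, w2 = (10, 64), (64, 1)
    hidden = matmul_shape(input_shape, w1)
    return matmul_shape(hidden, w2)

ok = check_mlp((32, 10))   # propagates to the output shape (32, 1)
# check_mlp((32, 12)) would raise ShapeError before any computation runs.
```

The point of the proposal is that this pass needs no dependent types: once the input shape is concrete, the check is just forward evaluation of shape rules, so it stays lightweight enough to run before every execution.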
Followup: Is it possible to automatically learn the shape-related effects of library functions (e.g., broadcast_tensor)?
Integration of Refinement Types and Separation Logic
Problem: Recently, people have developed verification systems, e.g., CN^{12} and RefinedC^{13}, which integrate refinement types and separation logic to reason about low-level programs. In those systems, refinements seem to still encode purely functional properties (really?). How about enabling type refinements to be separation-logic propositions? If we do so, such refined types are no longer structural, and we need to work with substructural refinement types. Does the integration make sense?
Proposal: It seems that we can start from linear refinement types.
Followup: Linear typing and separation logic share a lot of similarities. Are both rooted back in linear logic? If not, what is the fundamental difference?
Liquid-type-style Dijkstra Monads
Problem: Dijkstra monads^{14} were originally proposed to automate the verification-condition-generation (VCG) routine for analyzing effectful higher-order programs. They later formed the basis of F*, a proof-oriented programming language^{15}. One feature of F* is that it supports proof-irrelevant refinement types. Liquid types^{16} provide another method of balancing automatic type inference and dependent typing. One main idea of liquid types is to separately design a language for specification. Can we combine Dijkstra monads and liquid types? What would be the benefits, if any, of doing so?
Proposal: So the system would basically be a combination of liquid-style language design and F*-style type inference.
Followup: Dijkstra monads support multi-monadic effects in a systematic way. This means we might have a method to extend our ICFP paper to include the value interpretation as in the language itself.
Resource Analysis
Fraction Bounds for Expected Cost Analysis
Problem: Most current resource-analysis techniques derive linear, polynomial, or exponential bounds. However, when analyzing probabilistic programs, we often encounter bounds that involve fractions. For example, the simple loop while (prob(p)) tick(1); has an expected cost of $\frac{p}{1-p}$, where $p$ is a program variable that denotes a nontrivial probability.
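A quick sanity check of the bound: the tick count $N$ is geometric, with $\Pr[N = k] = p^k (1-p)$, so summing $k \cdot p^k (1-p)$ numerically recovers $\frac{p}{1-p}$:

```python
def expected_ticks(p: float, depth: int = 2000) -> float:
    """Truncated expectation of the geometric tick count of
    "while (prob(p)) tick(1);": sum of k * p^k * (1 - p)."""
    return sum(k * p**k * (1 - p) for k in range(depth))

# For p = 0.6 the closed form gives p / (1 - p) = 1.5; the truncated sum
# agrees up to an exponentially small tail.
estimate = expected_ticks(0.6)
```

The point of the project is to infer the closed form $\frac{p}{1-p}$ symbolically, as a ratio of polynomial templates, rather than by such numeric summation.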
Proposal: In many cases, a fraction bound can be expressed as a polynomial over another polynomial. So it should be possible to extend current template-based resource-analysis techniques (e.g., AARA) that derive polynomial bounds, by using a template for the denominator to reduce fraction-bound inference to polynomial-bound inference.
Followup: A random thought: how can we apply such a resource-analysis technique to Bayesian inference (like a recent paper by Beutner et al.^{17})?
AARA for Probabilistic Functional Programs with Non-monotone Resources
Problem: Our ICFP paper presents an extension of AARA that infers upper bounds on the expected cost of probabilistic functional programs with monotone resources (such as time). In general, combining probability and non-monotone resources (such as memory) is unreasonably complex^{18}. Applying optional stopping theorems, people have successfully developed techniques for analyzing arithmetic programs. It would be interesting to see if the whole methodology applies to functional programming, where we have inductive data structures (that can hold potential) instead of numbers.
Proposal: One version of the optional stopping theorem states that the expected change in one evaluation step should be “bounded.” This should intuitively hold for functional programming (in most cases, at least) because one step should change the size of a data structure by at most one (consider the case of cons). By constructing a suitable Markov chain, we might be able to reason about probabilistic functional programs with non-monotone resources.
Followup: Optional stopping theorems always talk about “expected change” and “expected termination time.” Why them?!
Newtonian Resource Analysis
Problem: Apply the Newtonian Program Analysis (NPA) framework^{4} and its extension^{7} to carry out resource analysis. Existing work has shown the possibility of performing recurrence-based analysis^{19}. How about other kinds of resource analysis, like AARA?
Proposal: Because amortized analysis is naturally two-vocabulary, it seems possible to recast AARA as a two-vocabulary abstract domain. The gap is that AARA is constraint-based, which differs from the general NPA framework.
Followup: NPA basically provides a framework for analyzing non-linearly-recursive programs by iteratively analyzing linearly-recursive programs. Can we use this idea to iteratively apply AARA to solve linear recursion?
Worst-case Lower-bound Resource Analysis
Problem: Most work on resource analysis focuses on worst-case upper bounds, but in practice people also want to know worst-case lower bounds, i.e., whether there exists a realizable execution path leading to the bound. This is usually done by testing techniques for worst-case analysis, e.g., fuzzing^{20}. Recent advances in incorrectness logic^{21} and coverage typing^{22} show the power of such realizability analysis. We want to follow those lines of work to build type systems or program logics for reasoning about worst-case lower bounds.
Proposal: From the perspective of incorrectness logic, one can extend separation logic with time credits. From the perspective of coverage typing, one can extend liquid resource types.
Followup: Is it possible to integrate such deductive analysis with fuzzing-based worst-case analysis?
Probabilistic Programming
Domain-specific Procedural Design
Problem: Daniel Ritchie suggested generative probabilistic programming for procedural modeling and design (the use of random programs to generate visual content with respect to aesthetic or functional constraints). Such an approach has been rediscovered in several individual developments, such as Scenic^{23} and Picture^{24}. So instead of separate developments, we want to develop a framework to ease such domainspecific procedural design.
Proposal: This has something to do with the eDSL framework.
Followup: It might be interesting to see how such static procedural design evolves into dynamic procedural design (e.g., synthesis of an environment). Another direction is interactive refinement (e.g., incorporating feedback from users) of the generated designs.
Language Design for Sound Involutive MCMC
Problem: A recent paper by Cusumano-Towner et al.^{25} studies how to ease the burden of implementing involutive MCMC by combining probabilistic programming and differentiable programming. It still remains a challenge (emm, you have to double-check) to statically verify the correctness of a concrete implementation in their framework.
Proposal: I would try to combine type systems for sound programmable inference and sound differentiable programming.
Followup: I am thinking of The Next XXX MCMC Algorithms.
Quantum Probability and Bayesian Inference
Problem: Quantum physics uses a form of complex probability. People have been thinking of it as a serious probability theory (see this and this). There are even quantum analogs of Bayesian inference (one and the other). Does it make sense to incorporate some of those ideas in current probabilistic programming systems?
Proposal: I still have to learn quantum physics (emm, not really).
Followup: Some people are trying to add quantum features to standard Bayesian models (check out this).
Program Synthesis
Interface Synthesis for Database Tables
Problem: One central function of low-code programming platforms is to automatically generate usable interfaces for manipulating database tables. In an application, the developers might want to customize a specific way to organize the interface, and they want to reuse the customization across the application.
Proposal: The customization can be thought of as a functor that takes a database-table module as its input and outputs an interface module for that table. For low-code programming, it would be useful to have a mechanism where the developers customize a table interactively and the platform automatically synthesizes the code of the customization functor.
Followup: Low-code programming involves a lot of UI-related customization. Is there a more general principle? For example, it might be interesting to investigate Natural Programming.
Resource-guided Synthesis and Implicit Computational Complexity
Problem: Recently, resource-guided synthesis and algorithm synthesis have received considerable attention^{26} ^{27}. We have also published a related PLDI paper. Those approaches can be categorized into two sorts: one develops a complex type system or program logic to enforce resource guarantees; the other proposes algorithmic templates that ensure resource guarantees. The latter is somewhat related to Implicit Computational Complexity (ICC), which characterizes algorithms by constraints on the way in which they are constructed. In other words, ICC research usually provides a non-standard programming language in which every legal program is guaranteed to have a certain complexity. Can we generalize the template-based approach to an ICC-based approach?
Proposal: A starting point could be non-size-increasing computation^{28}.
Followup: ICC seems not to be very fine-grained; e.g., researchers usually focus on whether the complexity is polynomial or not. How can we make it finer-grained? For example, would it be possible to develop an ICC class for $O(n \log n)$ algorithms?
Program Verification
Verifying Time/Memory Complexities in VST
Problem: Verified Software Toolchain (VST) is a set of verified tools that formally verify the functional correctness of C programs using Hoare logic and separation logic. I wonder if it is also convenient to verify resourceusage properties of C programs, like their time or memory complexities.
Proposal: It has been shown that separation logic naturally supports resource verification based on time credits^{29}. So conceptually it should be possible to have that in VST.
Followup: Can we develop some general reasoning libraries for resource verification?
Backward Proving in Coq
Problem: Coq proofs are organized in a forward, tactic-based manner, which is not very readable. In contrast, Isabelle (and some other theorem provers) provides a backward mode, where the intermediate goals are stated explicitly, which improves readability. Is it possible to use a backward pattern in Coq proofs? If not, would it be possible to have a backward-proof DSL that can be translated to Coq?
Proposal: Isabelle’s Isar might be a good starting point.
Followup: What is the best proof interface for system programmers?
Refinementbased System Programming
Problem: Many verification projects (e.g., seL4) start from a high-level specification implemented in a functional language (e.g., Haskell) and then prove that a concrete implementation in a system programming language (e.g., C) refines the specification. From the perspective of software engineering, this suggests another way of system programming: first implement a functional specification and then (iteratively) refine it to build a concrete low-level implementation.
Proposal: Build a language in which the high-level and low-level languages, e.g., Haskell and C, can interoperate seamlessly. Existing solutions are somewhat complex, in the sense that they aim to interoperate with a realistic C compiler directly. However, it would be enough for the development process if the C part were supported only partially, but with better tooling, e.g., an interpreter.
Followup: What is the best practice for multilanguage (i.e., multilingual) software engineering?

Jan Hoffmann and Steffen Jost. Two Decades of Automatic Amortized Resource Analysis. Math. Struct. Comput. Sci. (2022). DOI ↩︎

Vikash K. Mansinghka, Ulrich Schaechtle, Shivam Handa, Alexey Radul, Yutian Chen, and Martin Rinard. Probabilistic Programming with Programmable Inference. Prog. Lang. Design and Impl. (2018). DOI ↩︎

Zachary Kincaid, Thomas Reps, and John Cyphert. Algebraic Program Analysis. DOI ↩︎

Javier Esparza, Stefan Kiefer, and Michael Luttenberger. Newtonian Program Analysis. J. ACM (2010). DOI ↩︎ ↩︎ ↩︎

Kevin Batz, Adrian Gallus, Benjamin Lucien Kaminski, Joost-Pieter Katoen, and Tobias Winkler. Weighted Programming: A Programming Paradigm for Specifying Mathematical Models. Object-Oriented Prog., Syst., Lang., and Applications (2022). DOI ↩︎

Akash Lal, Tayssir Touili, Nicholas Kidd, and Thomas Reps. Interprocedural Analysis of Concurrent Programs Under a Context Bound. Tools and Algor. for the Constr. and Anal. of Syst. (2008). DOI ↩︎

Thomas Reps, Emma Turetsky, and Prathmesh Prabhu. Newtonian Program Analysis via Tensor Product. Princ. of Prog. Lang. (2016). DOI ↩︎ ↩︎

Mingshuai Chen, Joost-Pieter Katoen, Lutz Klinkenberg, and Tobias Winkler. Does a Program Yield the Right Distribution? Computer Aided Verif. (2022). DOI ↩︎

Matthew Might, Yannis Smaragdakis, and David Van Horn. Resolving and Exploiting the $k$-CFA Paradox: Illuminating Functional vs. Object-Oriented Program Analysis. Prog. Lang. Design and Impl. (2010). DOI ↩︎

Edsger W. Dijkstra. A Discipline of Programming. Prentice Hall PTR (1997). DOI ↩︎

Nick Benton. Simple Relational Correctness Proofs for Static Analyses and Program Transformations. Princ. of Prog. Lang. (2004). DOI ↩︎

Christopher Pulte, Dhruv C. Makwana, Thomas Sewell, Kayvan Memarian, Peter Sewell, and Neel Krishnaswami. CN: Verifying Systems C Code with Separation-Logic Refinement Types. Princ. of Prog. Lang. (2023). DOI ↩︎

Michael Sammler, Rodolphe Lepigre, Robbert Krebbers, Kayvan Memarian, Derek Dreyer, and Deepak Garg. RefinedC: Automating the Foundational Verification of C Code with Refined Ownership Types. Prog. Lang. Design and Impl. (2021). DOI ↩︎

Nikhil Swamy, Joel Weinberger, Cole Schlesinger, Juan Chen, and Benjamin Livshits. Verifying Higher-order Programs with the Dijkstra Monad. Prog. Lang. Design and Impl. (2013). DOI ↩︎

Nikhil Swamy, Cătălin Hriţcu, Chantal Keller, Aseem Rastogi, Antoine Delignat-Lavaud, Simon Forest, Karthikeyan Bhargavan, Cédric Fournet, Pierre-Yves Strub, Markulf Kohlweiss, Jean-Karim Zinzindohoue, and Santiago Zanella-Béguelin. Dependent Types and Multi-monadic Effects in F*. Princ. of Prog. Lang. (2016). DOI ↩︎

Patrick M. Rondon, Ming Kawaguci, and Ranjit Jhala. Liquid Types. Prog. Lang. Design and Impl. (2008). DOI ↩︎

Raven Beutner, Luke Ong, and Fabian Zaiser. Guaranteed Bounds for Posterior Inference in Universal Probabilistic Programming. Prog. Lang. Design and Impl. (2022). DOI ↩︎

Marcel Hark, Benjamin Lucien Kaminski, Jürgen Giesl, and Joost-Pieter Katoen. Aiming Low Is Harder: Induction for Lower Bounds in Probabilistic Program Verification. Princ. of Prog. Lang. (2020). DOI ↩︎

Zachary Kincaid, Jason Breck, Ashkan Forouhi Boroujeni, and Thomas Reps. Compositional Recurrence Analysis Revisited. Prog. Lang. Design and Impl. (2017). DOI ↩︎

Jiayi Wei, Jia Chen, Yu Feng, Kostas Ferles, and Isil Dillig. Singularity: Pattern Fuzzing for Worst Case Complexity. Found. of Softw. Eng. (2018). DOI ↩︎

Peter W. O’Hearn. Incorrectness Logic. Princ. of Prog. Lang. (2020). DOI ↩︎

Zhe Zhou, Ashish Mishra, Benjamin Delaware, and Suresh Jagannathan. Covering All the Bases: TypeBased Verification of Test Input Generators. Prog. Lang. Design and Impl. (2023). DOI ↩︎

Daniel J. Fremont, Tommaso Dreossi, Shromona Ghosh, Xiangyu Yue, Alberto L. Sangiovanni-Vincentelli, and Sanjit A. Seshia. Scenic: A Language for Scenario Specification and Scene Generation. Prog. Lang. Design and Impl. (2019). DOI ↩︎

Tejas D. Kulkarni, Pushmeet Kohli, Joshua B. Tenenbaum, and Vikash Mansinghka. Picture: A Probabilistic Programming Language for Scene Perception. Comp. Vision and Pattern Recognition (2015). DOI ↩︎

Marco Cusumano-Towner, Alexander K. Lew, and Vikash K. Mansinghka. Automating Involutive MCMC using Probabilistic and Differentiable Programming. arXiv ↩︎

Qinheping Hu, John Cyphert, Loris D’Antoni, and Thomas Reps. Synthesis with Asymptotic Resource Bounds. Computer Aided Verif. (2021). DOI ↩︎

Yican Sun, Xuanyu Peng, and Yingfei Xiong. Synthesizing Efficient Memoization Algorithms. ObjectOriented Prog., Syst., Lang., and Applications (2023). ↩︎

Martin Hofmann. Linear Types and Non-size-increasing Polynomial Time Computation. Logic in Computer Science (1999). DOI ↩︎

Arthur Charguéraud and François Pottier. Machine-Checked Verification of the Correctness and Amortized Complexity of an Efficient Union-Find Implementation. Interactive Theorem Proving (2015). DOI ↩︎