Resources for Prospective Students

(Last updated on Apr 8, 2024)

I am broadly interested in topics related to programming languages, especially formal verification, program analysis, and probabilistic programming. I am currently working on resource-safe system programming, programmable Bayesian inference, quantitative program analysis, and proof-oriented programming languages.

  • If you want to learn about possible projects and research directions that I would like to advise or collaborate on at the moment, check out the list below.
  • If you want to know more about my long-term perspective on incorporating randomness in software, check out my research statement that I wrote for my faculty applications.
  • If you care about teaching and mentoring, check out my teaching statement.
  • If you have concrete questions about me or Peking University, check out the Q&A below, and send me an email if your question is not covered there.

Q&A

Are you recruiting students?
Yes! I am just getting started and plan to ramp up slowly (one to two PhD students a year). I am also looking for self-motivated master's and undergraduate students.
Do I need to have a specific research background? Do I need to know a specific problem to work on?
The short answer is "no." Programming-languages or software-engineering research experience would be a plus but is not a prerequisite. It is more important to know what you are interested in and to be passionate enough about a broad area of problems to spend four to five years working on it.
Can I do machine learning / operating systems / database / ...?
Yes, if you are interested in the intersection of programming languages and the other field. Check out these interdisciplinary workshops: Symposium on Machine Programming, Workshop on Languages for Inference, Workshop on Programming Languages and Operating Systems, and Symposium on Database Programming Languages.
Why should I consider Peking University? Who else could I work with at Peking University?
Peking University has a world-class Computer Science Department (#14 according to CSRankings (2013-2023) and #19 according to QS Rankings (2023)). We have a large department with many amazing faculty and students working on software engineering, programming languages, systems, and other areas. Check out the following professors and their students, in last-name alphabetical order: Hao, Dan; Hu, Zhenjiang; Xie, Tao; Xiong, Yingfei; Zhang, Lu; and Zhang, Xin.
You did not reply to my email!
Please wait for one week, then resend it.

Current Research Directions

I always welcome new collaborators on the following (non-exhaustive) list of research topics. Please contact me if you are interested in working on one of these directions.

Resource-Safe System Programming

Resource usage (e.g., time, memory, and energy) is one of the central subjects of computer science. However, it usually does not play a central role in programming-language theory, such as formal semantics, static analysis, type systems, and program logics. I am generally interested in most resource-analysis projects, including both verification (e.g., of software implemented in an existing programming language) and language design (e.g., for easier and more precise resource analysis).

Analysis of resource usage is becoming increasingly critical for system programming. The recent success of Rust indicates that modern system programming languages should focus on static safety guarantees, such as memory safety and thread safety. Many programming languages do provide such guarantees, but mostly via runtime mechanisms (such as garbage collection), and those mechanisms result in unsatisfactory performance, especially in system software. Rust, with its memory-safety type system, reduces runtime resource consumption and thus achieves high performance. The performance of system software does not only involve memory management, though. I am interested in proposing a paradigm of resource-safe programming, where the resource consumption of programs is statically verified to meet developers' expectations, and programs are guaranteed to be free from multiple kinds of resource-related vulnerabilities, such as algorithmic complexity attacks and side-channel attacks. Below is a sample of medium- to long-term projects on resource-safe system programming:

  • Resource analysis of Rust: Can the fine-grained memory model improve the precision and efficiency of resource analysis?
  • Resource analysis of concurrent programs: Rust has limited support for static thread-safety guarantees, e.g., Rust does not statically rule out deadlocks. Can we incorporate a fine-grained concurrency model into system programming and extend resource analysis to support it?
  • Target-level resource analysis: How can we analyze compiled programs with respect to low-level resource metrics, e.g., the number of clock cycles? The analysis needs to be hardware-specific because we need to take cache, pipeline, branch prediction, etc. into consideration.
  • Discovery of resource-related security vulnerabilities: Can we detect algorithmic-complexity-attack bugs and/or side-channel-attack bugs? In addition, if such bugs exist, can we automatically synthesize an attacker to trigger the bugs?

At the moment, my favored approach is Automatic Amortized Resource Analysis (AARA1), but I am open to other approaches such as recurrence solving, sized types, ranking functions, and symbolic resource analysis.
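
For intuition, here is the classic potential-method example that AARA automates (my own notation, not tied to any particular system): assign a list $\ell$ the potential $\Phi(\ell) = q \cdot |\ell|$ and let type inference search for a coefficient $q$ such that the potential pays for all evaluation costs. For $\mathit{append}(\ell_1, \ell_2)$ with unit cost per recursive call, the constraint is $q \cdot |\ell_1| \ge |\ell_1|$, so $q = 1$ suffices and the inferred bound is the linear function $|\ell_1|$; in AARA, collecting and solving such constraints reduces to linear programming.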

Programmable Bayesian Inference

In contrast to frequentist methods like deep learning, Bayesian learning accounts for the probability distribution of hypotheses that produce the observed data and thus naturally quantifies the uncertainty that is present in the learned models. In addition, Bayesian learning enables the straightforward incorporation of domain knowledge and generalizes well to unseen data. These advantages of Bayesian learning are realized by probabilistic programming languages (PPLs) like Stan and Pyro, which provide an interface that separates model development from various built-in Bayesian inference algorithms.
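
For example, here is a minimal Pyro sketch (assuming a recent Pyro release; the model itself is made up for illustration) in which the model is written once and the inference backend is chosen separately:

import torch
import pyro
import pyro.distributions as dist
from pyro.infer import MCMC, NUTS

def model(data):
    # Prior over a latent mean; the likelihood conditions on the observed data.
    mu = pyro.sample("mu", dist.Normal(0.0, 1.0))
    with pyro.plate("data", len(data)):
        pyro.sample("obs", dist.Normal(mu, 1.0), obs=data)

# The same model could instead be handed to variational inference (pyro.infer.SVI);
# the choice of inference algorithm is independent of the model code.
mcmc = MCMC(NUTS(model), num_samples=500, warmup_steps=200)
mcmc.run(torch.tensor([0.2, 1.1, 0.7]))
print(mcmc.get_samples()["mu"].mean())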

The main downside of Bayesian learning is that probabilistic inference is computationally hard, and not even the most advanced inference algorithms work well for all models. To make probabilistic inference for Bayesian learning feasible for more models and larger data sets, some PPLs have therefore forgone the strict separation of modeling and inference by enabling users to customize specific Bayesian inference algorithms through so-called programmable inference2. However, it is all too easy for users to incorrectly program inference in a way that breaks convergence and leads to unsound learned models. Such a mistake can even go unnoticed.
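
As a toy illustration of this failure mode (a sketch of my own, not tied to any particular PPL): a Metropolis-Hastings step with a user-supplied asymmetric proposal must include the proposal-density correction; omitting it still produces samples, just from the wrong distribution.

import math, random

def log_target(x):
    # Unnormalized log-density of a standard normal target.
    return -0.5 * x * x

def mh_step(x, drift=0.5, scale=1.0):
    # Custom, asymmetric proposal: x' ~ Normal(x + drift, scale).
    x_new = random.gauss(x + drift, scale)
    log_q_fwd = -0.5 * ((x_new - (x + drift)) / scale) ** 2
    log_q_bwd = -0.5 * ((x - (x_new + drift)) / scale) ** 2
    # Dropping the (log_q_bwd - log_q_fwd) correction below would silently
    # bias every estimate toward the drift direction.
    log_alpha = log_target(x_new) - log_target(x) + log_q_bwd - log_q_fwd
    return x_new if random.random() < math.exp(min(0.0, log_alpha)) else x

x, total, n = 0.0, 0.0, 50_000
for _ in range(n):
    x = mh_step(x)
    total += x
print(total / n)  # close to 0, the mean of the target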

I am interested in resolving the tension between soundness and flexibility of probabilistic inference by applying programming-language techniques such as type systems and static analysis. The vision is that new programming abstractions and type systems ensure the soundness of programmable inference, while static analysis and program synthesis assist users by automating the customization of inference with the goal of improving efficiency. Our PLDI paper is a concrete demonstration of this research direction.

Quantitative Program Analysis

Randomness in programs can show up in two ways: external randomness (e.g., uncertainty from the environment) and internal randomness (e.g., randomized algorithms). As Kleene algebras have become an algebraic foundation for the analysis of non-probabilistic programs3, I am interested in developing an algebraic framework for the analysis of probabilistic programs. To this end, we need to formulate an algebraic semantic framework for probabilistic programs and develop a generic algorithm to solve program-analysis instances over it.

One fundamental difference between non-probabilistic and probabilistic programs is that an execution of a non-probabilistic program is a chain, whereas an execution of a probabilistic program is a tree. To see the point, consider a program model that allows nondeterminism. A non-probabilistic program then corresponds to a collection of possible execution chains, but a probabilistic program should be interpreted as a collection of distributions, each of which can be encoded as an execution tree, where each rooted path (annotated with a probability) is a concrete run of the program. See our MFPS paper for a more formal presentation based on hypergraphs.

Abstract interpretation is a powerful framework for describing and solving program-analysis instances. Researchers have proposed generic solving strategies, such as chaotic iteration, widening, and narrowing. However, such iterative solving algorithms do not fit the quantitative nature of probabilistic programs. Suppose we want to analyze a probabilistic loop “$\mathbf{while}~(\mathbf{prob}(3/4))~\{ x = x + 1; \}$.” The expected increase of the variable $x$ is the least solution to the equation $r = f(r) \equiv (1/4) \cdot 0 + (3/4) \cdot (r+1)$, which can be solved analytically: $r = 3$. However, an iterative solving strategy would approach $r$ by a sequence of approximations $\{ f^i(0) \}_{i \in \mathbb{N}}$, which does not converge in any finite number of iterations. I am interested in adapting Newtonian Program Analysis4 to the probabilistic setting, with the observation that Newton’s method is more suitable for such a quantitative setting.
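
To make the contrast concrete, here is a tiny numerical sketch (purely illustrative) comparing Kleene iteration with a Newton step on the equation above; since $f$ is affine, a single Newton step already lands on $r = 3$.

f = lambda r: 0.25 * 0.0 + 0.75 * (r + 1.0)

# Kleene iteration: converges to the least fixed point r = 3 only in the limit.
r = 0.0
for _ in range(20):
    r = f(r)
print(r)                  # ~2.99, never exactly 3 after finitely many steps

# Newton's method on h(r) = r - f(r); h is affine here, so one step is exact.
h = lambda r: r - f(r)
dh = 1.0 - 0.75           # derivative of h
print(0.0 - h(0.0) / dh)  # 3.0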

Moreover, the proposed algebraic framework for reasoning about probabilistic programs should, in principle, also be suitable for reasoning about other quantitative properties, such as resource consumption. It would be interesting to see if this observation leads to a more general quantitative-program-analysis framework. A recent paper by Batz et al. proposed a notion of weighted programming5, which further generalizes the quantitative properties. Applications of the proposed framework include technology transfer between probabilistic analysis and resource analysis; for example, Maximum-a-Posteriori (MAP) estimation has a problem statement very similar to Worst-Case Analysis (WCA), so it would be interesting to use this observation to build new statistical resource-analysis techniques.

Other Projects

I am always open to new directions, including interdisciplinary ones. Below is a sample of initial ideas that I am interested in exploring in the short (or medium) term. The list is not complete, and be aware that most of these thoughts are not mature at all; in fact, I expect some of them to quickly lead to negative results.

Static Analysis

Library Summarization via Tensor Products

Problem: Library summarization is an effective way to accelerate the analysis of client code. However, information about the client is unknown at library-summarization time, preventing complete summarization of the library. A state-of-the-art approach is Conditional Dyck-CFL Reachability Analysis (ConCRA), which targets graph-reachability-like static analyses and uses hypothetical summaries.

Proposal: Lal et al.6 proposed a tensor-product principle: tensor products, together with an appropriate detensor operation, allow computations to be rearranged in certain ways; for example, they can be used to delay a multiplication in a chain of multiplications. This principle may allow us to generalize, within the summarization framework, Dyck-CFL-reachability-based analyses to algebraic, semiring-based analyses.
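
A concrete instance of that principle (a standard linear-algebra identity, stated here for intuition rather than taken from the cited paper) is $\mathrm{vec}(A X B) = (B^{\top} \otimes A)\,\mathrm{vec}(X)$: the left and right multiplications are packaged into one tensored operator whose application can be delayed. A quick numerical check:

import numpy as np

A, B, X = (np.random.rand(2, 2) for _ in range(3))
lhs = (A @ X @ B).flatten(order="F")            # vec(A X B), column-major
rhs = np.kron(B.T, A) @ X.flatten(order="F")    # (B^T tensor A) vec(X)
print(np.allclose(lhs, rhs))                    # True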

Follow-up: The library summarization problem can be generalized to partially solving an equation system for a static analysis. Any thoughts on partial evaluation?

NPA-TP for Non-idempotent Semirings

Problem: Esparza et al.4 generalized Newton’s method to a method for finding fixed-points of systems of equations over semirings, leading to a new algorithm to solve interprocedural dataflow analysis. Reps et al.7 developed NPA-TP, which extended Esparza’s framework with an improved algorithm for solving linear-context-free-language (LCFL) sub-problems. However, NPA-TP assumes the underlying abstract domain admits an idempotent semiring; such an assumption rules out interesting analysis domains, especially for numerical properties, e.g., the reaching probability.

Proposal: Esparza’s original framework does support non-idempotent semirings, so it would be a good starting point. The NPA-TP framework features tensor products to solve LCFL sub-problems, and from the perspective of algebras, the principle of tensor products does not require idempotence.

Follow-up: Some recent work shows that quantum computation also needs non-idempotent semirings. Can the idea of NPA-TP be generalized to analyze quantum programs?

Complicated Probability Manipulation

Problem: In many programs, the manipulation of probability is very complicated, and even manually analyzing their correctness is difficult. Consider the problem below:

x = p;
while (prob(0.5)) {
  x = 2 * x;
  if (x >= 1) {
    x = x - 1;
  }
}

We want to verify that at the end of the loop, the probability of the event $x \ge \frac{1}{2}$ is exactly $p$.
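
Before any formal reasoning, a quick Monte Carlo check (my own sanity test, not a verification) already supports the claim:

import random

def run(p):
    x = p
    while random.random() < 0.5:
        x = 2 * x
        if x >= 1:
            x = x - 1
    return x

p, trials = 0.3, 200_000
freq = sum(run(p) >= 0.5 for _ in range(trials)) / trials
print(freq)  # empirically close to p = 0.3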

Proposal: Probability-generating functions provide a succinct way to describe probability distributions of discrete random variables. We might try to derive the probability-generating function of a probabilistic program to reason about its resulting distribution. A recent paper8 is in this direction.

Follow-up: There is a connection between derivatives of the probability-generating function of a random variable $X$ and the moments of $X$. Can we use the fact to perform moment analysis?

Fine-grained Heap Abstraction via Regular Patterns

Problem: Static analysis of heap-manipulating programs often involves heap sensitivity. In the simplest case, all memory allocations at the same program location are treated as a single allocation (thus in a context-insensitive way). Advanced techniques include three-valued logic for shape analysis, $k$-CFA, $m$-CFA9, etc. However, they seem insufficient when we want to analyze recursive programs. (Imagine that in functional programming, iterating over a list is usually implemented as a recursive function.)

Proposal: We can treat the set of all call strings as a language; then different strategies for handling sensitivity are essentially different ways of defining equivalence classes on the language. A presumably good proposal is to use regular patterns to define the equivalence classes (see the sketch below). For example, $(ff)^*$ indicates calling a function $f$ an even number of times, and $f(ff)^*$ an odd number.
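
A minimal sketch of that idea (the classification scheme and names are made up for illustration): call strings over a single function f are mapped to equivalence classes by regular patterns.

import re

# Two equivalence classes of call strings over a single function f,
# distinguished by the parity of the number of calls.
EVEN = re.compile(r"(ff)*")      # f called an even number of times
ODD  = re.compile(r"f(ff)*")     # f called an odd number of times

def abstract(call_string: str) -> str:
    if EVEN.fullmatch(call_string):
        return "even"
    if ODD.fullmatch(call_string):
        return "odd"
    return "other"

print([abstract(s) for s in ["", "f", "ff", "fff"]])
# ['even', 'odd', 'even', 'odd']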

Follow-up: In fact, this approach should generalize to all sensitivities that use call strings. Is it meaningful to develop a general framework with regular-pattern-sensitivity?

Formal Semantics

Combination of Continuous Distributions and Nondeterminism-first

Problem: Nondeterminism-first means that we resolve nondeterminism prior to program inputs when defining a denotational semantics. For example, suppose a deterministic function has the signature $A \to B$; then the standard resolution of nondeterminism gives us $A \to \wp(B)$, but nondeterminism-first suggests $\wp(A \to B)$. Nondeterminism-first can be useful for, e.g., compile-time nondeterminism. A recent study proposed a denotational semantics for probabilistic programs that combines discrete distributions (on a countable state space) with nondeterminism-first. However, it remains open whether the semantics can be extended to support continuous distributions.
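
A toy Python illustration of the difference (not a semantics, just the intuition; all names are made up): nondeterminism-first fixes one deterministic function before seeing any input, whereas the standard resolution may choose differently for each input.

from itertools import product

ident = lambda x: x
inc = lambda x: x + 1
inputs = (0, 1, 2)

# Nondeterminism-first: an element of P(A -> B); pick one function up front.
nondet_first = [ident, inc]
behaviors_first = [tuple(f(a) for a in inputs) for f in nondet_first]

# Standard resolution: A -> P(B); each input may resolve the choice on its own.
standard = lambda a: {ident(a), inc(a)}
behaviors_standard = list(product(*[sorted(standard(a)) for a in inputs]))

print(behaviors_first)            # 2 globally consistent behaviors
print(len(behaviors_standard))    # 8 input-wise resolutions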

Proposal: In fact, I do not have any ideas yet.

Follow-up: What about a computational theory for such a combination?

Combination of Probability and Message-passing Concurrency

Problem: The common treatment of a probabilistic flip in the semantics is to “split” the current program configuration into two, one for the “world” where the flip shows heads and the other for tails. When we need to deal with multiple concurrent processes, such a treatment means a local flip in a process results in duplicating all other processes. Such a non-local (thus non-compositional) behavior has been shown to be problematic for proving soundness of formal methods (e.g., type systems, which are usually compositional).

Proposal: The result of a probabilistic flip should remain as local as possible. We have recently developed an operational semantics with the desirable locality, but I feel that a more algebraic presentation would be better. For example, we might want to reorganize the operational semantics into reduction rules and congruence rules. Furthermore, we should prove an equivalence result with respect to the ordinary probabilistic semantics (e.g., Markov chains).

Follow-up: I can always imagine that adding nondeterminism causes a lot of trouble.

Building a Framework for Nondeterminism-as-refinement

Problem: Nondeterminism is often referred to as the basis of abstraction and refinement in the setting of stepwise development10. However, such a pattern has not become popular in modern programming languages. What is the main obstacle to developing such a framework? Also, people have developed many kinds of semantics for nondeterminism. Which semantics is the most suitable one for refinement-based programming and reasoning?

Proposal: I guess this has something to do with relational Hoare logics11. At least they provide a way to use nondeterminism in a formal framework for proving program properties.

Follow-up: How should we design a language around it?

Type Systems

Safe Interaction between Typed Domain Specific Languages

Problem: Suppose we have a typed programming language that is suitable for developing embedded Domain Specific Languages (eDSLs). Every eDSL can directly use the base type system from the host language, but it can also have its own advanced domain-specific type system. Languages such as MetaOCaml already support such a mechanism, but the interaction between the host language and the eDSL, or maybe even between different eDSLs, is not very convenient yet. It would be nice to have a simple and safe interaction scheme among eDSLs on the same host language.

Proposal: In fact, I do not have any ideas yet.

Follow-up: What else should we have for such a flexible eDSL framework? For example, we might want to consider multi-stage compilation, debuggers, mixed paradigms, etc.

Lightweight Shape Check for Array Programming

Problem: Array programming (or tensor programming) is pervasive nowadays because of machine learning. Many machine-learning frameworks lack the ability to statically check that tensor shapes are always consistent during program execution. Approaches based on dependent/refinement/indexed types all seem too heavy to become practical.

Proposal: In many (really?) cases, when the input of an array program is fixed, the shapes of all intermediate arrays are also fixed. So we can perform a static shape check after the input is given, but before we execute the program, by propagating the concrete shape information from the given input.
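
A minimal sketch of such a check (all names and the program representation are hypothetical): once the input shapes are known, propagate them through the program symbolically and report mismatches before any tensor is actually computed.

def matmul_shape(a, b):
    if a[-1] != b[0]:
        raise TypeError(f"matmul mismatch: {a} @ {b}")
    return (a[0], b[1])

def add_shape(a, b):
    if a != b:
        raise TypeError(f"add mismatch: {a} + {b}")
    return a

RULES = {"matmul": matmul_shape, "add": add_shape}

def check_shapes(program, input_shapes):
    # program: list of (dst, op, lhs, rhs) over 2-D tensors, e.g. ("h", "matmul", "x", "w1").
    shapes = dict(input_shapes)
    for dst, op, lhs, rhs in program:
        shapes[dst] = RULES[op](shapes[lhs], shapes[rhs])
    return shapes

prog = [("h", "matmul", "x", "w1"), ("o", "matmul", "h", "w2")]
print(check_shapes(prog, {"x": (32, 784), "w1": (784, 128), "w2": (128, 10)}))
# {'x': (32, 784), 'w1': (784, 128), 'w2': (128, 10), 'h': (32, 128), 'o': (32, 10)}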

Follow-up: Is it possible to automatically learn the shape-related effects of library functions (e.g., broadcast_tensor)?

Integration of Refinement Types and Separation Logic

Problem: Recently, people have developed verification systems, e.g., CN12 and RefinedC13, which integrate refinement types and separation logic to reason about low-level programs. In those systems, refinements seem to still encode purely functional properties (really?). How about enabling type refinements to be separation-logic propositions? To that end, such refined types are not structural any more, and we would need to work with substructural refinement types. Does the integration make sense?

Proposal: It seems that we can start from linear refinement types.

Follow-up: Linear typing and separation logic share a lot of similarities. Are both rooted in linear logic? If not, what is the fundamental difference?

Liquid-type-style Dijkstra Monads

Problem: Dijkstra monads14 were originally proposed to automate the verification-condition-generation (VCG) routine for analyzing effectful higher-order programs. They later formed the basis of F*, a proof-oriented programming language15. One feature of F* is that it supports proof-irrelevant refinement types. Liquid types16 provide another method of balancing automatic type inference and dependent typing. One main idea of liquid types is to separately design a language for specifications. Can we combine Dijkstra monads and liquid types? What would be the benefits, if any, of doing so?

Proposal: So the system would basically be a combination of liquid-style language design and F*-style type inference.

Follow-up: Dijkstra monads support multi-monadic effects in a systematic way. This means we might have a method to extend our ICFP paper to include the value interpretation as in the language itself.

Resource Analysis

Fraction Bounds for Expected Cost Analysis

Problem: Most current resource-analysis techniques derive linear, polynomial, or exponential bounds. However, when analyzing probabilistic programs, we often encounter bounds that involve fractions. For example, the simple loop while (prob(p)) tick(1); has an expected cost of $\frac{p}{1-p}$, where $p$ is a program variable that denotes a non-trivial probability.
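
For the record, that bound follows from the one-step recurrence for the expected cost $C$: with probability $p$ the loop pays one tick and faces the same situation again, and with probability $1-p$ it stops, so $C = p \cdot (1 + C) + (1 - p) \cdot 0$, whose solution is $C = \frac{p}{1-p}$.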

Proposal: In many cases, a fraction bound can be expressed as a polynomial over another polynomial. So it should be possible to extend current template-based resource-analysis techniques (e.g., AARA) that derive polynomial bounds, by using a template for the denominator to reduce fraction-bound inference to polynomial-bound inference.

Follow-up: A random thought: how can we apply such a resource-analysis technique to Bayesian inference (like a recent paper by Beutner et al.17)?

AARA for Probabilistic Functional Programs with Non-monotone Resources

Problem: Our ICFP paper presents an extension of AARA that infers upper bounds on the expected cost of probabilistic functional programs with monotone resources (such as time). In general, combining probability and non-monotone resources (such as memory) is unreasonably complex18. Applying optional stopping theorems, people have successfully developed techniques for analyzing arithmetic programs. It would be interesting to see if the whole methodology applies to functional programming, where we have inductive data structures (that can hold potential) instead of numbers.

Proposal: One version of optional stopping theorems states that the expected change in one evaluation step should be “bounded.” This should intuitively hold for functional programming (I mean, in most cases) because one step should change the size of a data structure by at most one (consider the case of cons). By constructing a suitable Markov chain, we might be able to reason about probabilistic functional programs with non-monotone resources.
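
For reference, one standard formulation of the bounded-expected-change condition that I have in mind (stated for martingales; supermartingales give an inequality): if $(X_n)$ is a martingale, $T$ is a stopping time with $\mathbb{E}[T] < \infty$, and there is a constant $c$ such that $\mathbb{E}[\,|X_{n+1} - X_n| \mid \mathcal{F}_n] \le c$ whenever $T > n$, then $\mathbb{E}[X_T] = \mathbb{E}[X_0]$.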

Follow-up: Optional stopping theorems always talk about “expected change” and “expected termination time.” Why them?!

Newtonian Resource Analysis

Problem: Apply the Newtonian Program Analysis (NPA) framework4 and its extension7 to carry out resource analysis. Existing work has shown the possibility of performing recurrence-based analysis19. How about other kinds of resource analysis, like AARA?

Proposal: Because amortized analysis is naturally two-vocabulary, it seems possible to recast AARA as a two-vocabulary abstract domain. The gap is that AARA is constraint-based, which differs from the general NPA framework.

Follow-up: NPA basically provides a framework for analyzing non-linear-recursive programs by iteratively analyzing linear-recursive programs. Can we use this idea to iteratively apply AARA to linear recursion?

Worst-case Lower-bound Resource Analysis

Problem: Most work on resource analysis focuses on worst-case upper bounds, but in practice people also want to know worst-case lower bounds, i.e., whether there exists a realizable execution path that attains the bound. This is usually done with testing techniques for worst-case analysis, e.g., fuzzing20. Recent advances in incorrectness logic21 and coverage typing22 show the power of such realizability analysis. We want to follow those lines of work to build type systems or program logics for reasoning about worst-case lower bounds.

Proposal: From the perspective of incorrectness logic, one can extend separation logic with time credits. From the perspective of coverage typing, one can extend liquid resource types.

Follow-up: Is it possible to integrate such deductive analysis with fuzzing-based worst-case analysis?

Probabilistic Programming

Domain-specific Procedural Design

Problem: Daniel Ritchie suggested generative probabilistic programming for procedural modeling and design (the use of random programs to generate visual content with respect to aesthetic or functional constraints). Such an approach has been rediscovered in several individual developments, such as Scenic23 and Picture24. Instead of continuing with separate developments, we want to build a framework that eases such domain-specific procedural design.

Proposal: This has something to do with the eDSL framework.

Follow-up: It might be interesting to see how such static procedural design evolves into dynamic procedural design (e.g., synthesis of an environment). Another direction is interactive refinement (e.g., incorporating feedback from users) of the generated designs.

Language Design for Sound Involutive MCMC

Problem: A recent paper by Cusumano-Towner et al.25 studies how to ease the burden of implementing involutive MCMC by combining probabilistic programming and differentiable programming. It still remains a challenge (emm, you have to double check) to statically verify the correctness of a concrete implementation in their framework.

Proposal: I would try to combine type systems for sound programmable inference and sound differentiable programming.

Follow-up: I am thinking of The Next XXX MCMC Algorithms.

Quantum Probability and Bayesian Inference

Problem: Quantum physics uses a form of complex probability. People have been thinking of it as a serious probability theory (see this and this). There are even quantum analogs of Bayesian inference (one and the other). Does it make sense to incorporate some of those ideas in current probabilistic programming systems?

Proposal: I still have to learn quantum physics (emm, not really).

Follow-up: Some people are trying to add quantum features to standard Bayesian models (check out this).

Program Synthesis

Interface Synthesis for Database Tables

Problem: One central function of low-code programming platforms is to automatically generate usable interfaces for manipulating database tables. In an application, developers might want to customize a specific way to organize the interface, and they want to reuse that customization across the application.

Proposal: The customization can be thought of as a functor that takes a database-table module as its input and outputs an interface module for that table. For low-code programming, it would be useful to have a mechanism where developers customize a table interactively and the platform automatically synthesizes the code of the customization functor.

Follow-up: Low-code programming involves many UI-related customizations. Is there a more general principle? For example, it might be interesting to investigate Natural Programming.

Resource-guided Synthesis and Implicit Computational Complexity

Problem: Recently, resource-guided synthesis and algorithm synthesis have received considerable attention26 27. We also published a related PLDI paper. Those approaches can be categorized into two sorts: one develops a complex type system or program logic to enforce resource guarantees, while the other proposes algorithmic templates that ensure resource guarantees. The latter is somewhat related to Implicit Computational Complexity (ICC), which characterizes algorithms by constraints on the way in which they are constructed. In other words, ICC research usually provides a non-standard programming language in which every legal program is guaranteed to have a certain complexity. Can we generalize the template-based approach to an ICC-based approach?

Proposal: A starting point could be non-size-increasing computation28.

Follow-up: ICC seems to be not so fine-grained; e.g., researchers usually focus on whether the complexity is polynomial or not. How can we make it finer-grained? For example, would it be possible to develop an ICC class for $O(n \log n)$ algorithms?

Program Verification

Verifying Time/Memory Complexities in VST

Problem: Verified Software Toolchain (VST) is a set of verified tools that formally verify the functional correctness of C programs using Hoare logic and separation logic. I wonder if it is also convenient to verify resource-usage properties of C programs, like their time or memory complexities.

Proposal: It has been shown that separation logic naturally supports resource verification based on time credits29. So conceptually it should be possible to have that in VST.

Follow-up: Can we develop some general reasoning libraries for resource verification?

Backward Proving in Coq

Problem: Coq proofs are organized in a forward, tactic-based manner, which is not very readable. In contrast, Isabelle (and some other theorem provers) provides a backward mode, where the intermediate goals are stated explicitly, which improves readability. Is it possible to use a backward pattern in Coq proofs? If not, would it be possible to have a backward-proof DSL that can be translated to Coq?

Proposal: Isabelle’s Isar might be a good starting point.

Follow-up: What is the best proof interface for system programmers?

Refinement-based System Programming

Problem: Many verification projects (e.g., seL4) start from a high-level specification implemented in a functional language (e.g., Haskell) and then prove that a concrete implementation in a system programming language (e.g., C) refines the specification. From the perspective of software engineering, this suggests another way of system programming: first implement a functional specification and then (iteratively) refine it into a concrete low-level implementation.

Proposal: Build a language in which the high-level and low-level languages (e.g., Haskell and C) can interoperate seamlessly. Existing solutions are rather complex, in the sense that they aim to interoperate with a realistic C compiler directly. However, for the development process it would be good enough if the C part were supported only partially, but with better tooling, e.g., an interpreter.

Follow-up: What is the best practice for multi-language (i.e., multilingual) software engineering?


  1. Jan Hoffmann and Steffen Jost. Two Decades of Automatic Amortized Resource Analysis. Math. Struct. Comput. Sci. (2022). DOI ↩︎

  2. Vikash K. Mansinghka, Ulrich Schaechtle, Shivam Handa, Alexey Radul, Yutian Chen, and Martin Rinard. Probabilistic Programming with Programmable Inference. Prog. Lang. Design and Impl. (2018). DOI ↩︎

  3. Zachary Kincaid, Thomas Reps, and John Cyphert. Algebraic Program Analysis. DOI ↩︎

  4. Javier Esparza, Stefan Kiefer, and Michael Luttenberger. Newtonian Program Analysis. J. ACM (2010). DOI ↩︎ ↩︎ ↩︎

  5. Kevin Batz, Adrian Gallus, Benjamin Lucien Kaminski, Joost-Pieter Katoen, and Tobias Winkler. Weighted Programming: A Programming Paradigm for Specifying Mathematical Models. Object-Oriented Prog., Syst., Lang., and Applications (2022). DOI ↩︎

  6. Akash Lal, Tayssir Touili, Nicholas Kidd, and Thomas Reps. Interprocedural Analysis of Concurrent Programs Under a Context Bound. Tools and Algor. for the Constr. and Anal. of Syst. (2008). DOI ↩︎

  7. Thomas Reps, Emma Turetsky, and Prathmesh Prabhu. Newtonian Program Analysis via Tensor Product. Princ. of Prog. Lang. (2016). DOI ↩︎ ↩︎

  8. Mingshuai Chen, Joost-Pieter Katoen, Lutz Klinkenberg, and Tobias Winkler. Does a Program Yield the Right Distribution?. Computer Aided Verif. (2022). DOI ↩︎

  9. Matthew Might, Yannis Smaragdakis, and David Van Horn. Resolving and Exploiting the $k$-CFA Paradox: Illuminating Functional vs. Object-Oriented Program Analysis. Prog. Lang. Design and Impl. (2010). DOI ↩︎

  10. Edsger W. Dijkstra. A Discipline of Programming. Prentice Hall PTR (1997). DOI ↩︎

  11. Nick Benton. Simple Relational Correctness Proofs for Static Analyses and Program Transformations. Princ. of Prog. Lang. (2004). DOI ↩︎

  12. Christopher Pulte, Dhruv C. Makwana, Thomas Sewell, Kayvan Memarian, Peter Sewell, and Neel Krishnaswami. CN: Verifying Systems C Code with Separation-Logic Refinement Types. Princ. of Prog. Lang. (2023). DOI ↩︎

  13. Michael Sammler, Rodolphe Lepigre, Robbert Krebbers, Kayvan Memarian, Derek Dreyer, and Deepak Garg. RefinedC: Automating the Foundational Verification of C Code with Refined Ownership Types. Prog. Lang. Design and Impl. (2021). DOI ↩︎

  14. Nikhil Swamy, Joel Weinberger, Cole Schlesinger, Juan Chen, and Benjamin Livshits. Verifying Higher-order Programs with the Dijkstra Monad. Prog. Lang. Design and Impl. (2013). DOI ↩︎

  15. Nikhil Swamy, Cătălin Hriţcu, Chantal Keller, Aseem Rastogi, Antoine Delignat-Lavaud, Simon Forest, Karthikeyan Bhargavan, Cédric Fournet, Pierre-Yves Strub, Markulf Kohlweiss, Jean-Karim Zinzindohoue, and Santiago Zanella-Béguelin. Dependent Types and Multi-monadic Effects in F*. Princ. of Prog. Lang. (2016). DOI ↩︎

  16. Patrick M. Rondon, Ming Kawaguci, and Ranjit Jhala. Liquid Types. Prog. Lang. Design. and Impl. (2008). DOI ↩︎

  17. Raven Beutner, Luke Ong, and Fabian Zaiser. Guaranteed Bounds for Posterior Inference in Universal Probabilistic Programming. Prog. Lang. Design and Impl. (2022). DOI ↩︎

  18. Marcel Hark, Benjamin Lucien Kaminski, Jürgen Giesl, and Joost-Pieter Katoen. Aiming Low Is Harder: Induction for Lower Bounds in Probabilistic Program Verification. Princ. of Prog. Lang. (2020). DOI ↩︎

  19. Zachary Kincaid, Jason Breck, Ashkan Forouhi Boroujeni, and Thomas Reps. Compositional Recurrence Analysis Revisited. Prog. Lang. Design and Impl. (2017). DOI ↩︎

  20. Jiayi Wei, Jia Chen, Yu Feng, Kostas Ferles, and Isil Dillig. Singularity: Pattern Fuzzing for Worst Case Complexity. Found. of Softw. Eng. (2018). DOI ↩︎

  21. Peter W. O’Hearn. Incorrectness Logic. Princ. of Prog. Lang. (2020). DOI ↩︎

  22. Zhe Zhou, Ashish Mishra, Benjamin Delaware, and Suresh Jagannathan. Covering All the Bases: Type-Based Verification of Test Input Generators. Prog. Lang. Design and Impl. (2023). DOI ↩︎

  23. Daniel J. Fremont, Tommaso Dreossi, Shromona Ghosh, Xiangyu Yue, Alberto L. Sangiovanni-Vincentelli, and Sanjit A. Seshia. Scenic: A Language for Scenario Specification and Scene Generation. Prog. Lang. Design and Impl. (2019). DOI ↩︎

  24. Tejas D. Kulkarni, Pushmeet Kohli, Joshua B. Tenenbaum, and Vikash Mansinghka. Picture: A Probabilistic Programming Language for Scene Perception. Comp. Vision and Pattern Recognition (2015). DOI ↩︎

  25. Marco Cusumano-Towner, Alexander K. Lew, and Vikash K. Mansinghka. Automating Involutive MCMC using Probabilistic and Differentiable Programming. arXiv ↩︎

  26. Qinheping Hu, John Cyphert, Loris D’Antoni, and Thomas Reps. Synthesis with Asymptotic Resource Bounds. Computer Aided Verif. (2021). DOI ↩︎

  27. Yican Sun, Xuanyu Peng, and Yingfei Xiong. Synthesizing Efficient Memoization Algorithms. Object-Oriented Prog., Syst., Lang., and Applications (2023). ↩︎

  28. Martin Hofmann. Linear types and non-size-increasing polynomial time computation. Logic in Computer Science (1999). DOI ↩︎

  29. Arthur Charguéraud and François Pottier. Machine-Checked Verification of the Correctness and Amortized Complexity of an Efficient Union-Find Implementation. Interactive Theorem Proving (2015). DOI ↩︎