├── 2LevelsCategories.md ├── Dependance.md ├── GenericProgrammingRevisited.md ├── InvisibleMath.md ├── LICENSE ├── ModuleSystem.md ├── PE-Revisited.md ├── README.md └── ResearchIdeas.md /2LevelsCategories.md: -------------------------------------------------------------------------------- 1 | Right now, every component in a category (there are 3 in agda-categories, namely objects, morphisms and morphism equivalence) is supposed to live at a single level / belong to a single universe. But what if we knew that some of these were composed of 'items' which lived in a (fixed, finite, given) set of levels? 2 | 3 | The first thing to explore would be to do this for 2 levels, i.e. where all 3 parts could have pieces from 2 universes. 4 | 5 | Slice categories offer a first test case, as do coslice and comma categories. 6 | 7 | An obvious generalization would be an IJK-category. In fact, comma categories are probably 221-categories in that sense. 8 | 9 | These seem to be deeply related to Displayed Categories! Right now, I've started [doodling around](https://github.com/JacquesCarette/Categorical-Playground/blob/-/2Level/Category.lagda) on this. 10 | -------------------------------------------------------------------------------- /Dependance.md: -------------------------------------------------------------------------------- 1 | This is inspired by James Koppel's [blog post](https://www.pathsensitive.com/2022/09/bet-you-cant-solve-these-9-dependency.html) on "software dependence". Interested readers should also read his paper (co-authored with Daniel Jackson) [Demystifying Dependence](https://www.jameskoppel.com/files/papers/demystifying_dependence.pdf). 2 | 3 | The purpose is to see if we can model, mathematically, what that paper says. First, let's get some red herrings out of the way. 4 | 5 | A naive mathematical approach would invoke 'derivatives': we all know that constants have derivative 0, and constants are the things that "don't depend" on anything else.
The problem here is that this presumes real-valuedness, continuity, and all sorts of other undesirable things. We can do better. 6 | 7 | Let's say that a function $f : X \rightarrow Y$ *is independent of* $X$ if $\forall x,y : X\ . f(x) = f(y)$. In other words, $f$ does not depend on its input at all. This definition only needs evaluation, equality, and universal quantification to make sense. Very weak requirements indeed! 8 | 9 | That definition is about independence, not dependence. As we'd like to be constructive (if possible), let's say that $f : X \rightarrow Y$ *can be shown to depend on* $X$ if $\Sigma x,y : X\ . \neg (f(x) = f(y))$, i.e. we can exhibit a pair of values of $X$ that evaluate (under $f$) to different values. 10 | 11 | But if we look at many of the examples given, we can see that this doesn't capture anywhere close to the right ideas. Again, an example: pick $f : \mathbb{R} \rightarrow \mathbb{R}$ as $f(x) = \sin(x)^2+\cos(x)^2$. Does $f$ *depend on* $x$? Does $f$ *depend on* $\sin$ and $\cos$? Using our mathematical definition, $f$ does not depend on $x$. But if you implement it using floating point, it will. In many ways, this is a spurious example, because the 'implements' relation here is not faithful. However, the other question remains: does $f$ depend on $\sin$? That is a perfectly legitimate question, but it is being asked naively. First, the question doesn't even "type check". Neither $\sin$ nor $\cos$ is in $\mathbb{R}$! What we're doing in that question is mixing syntax and semantics.
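The constructive definition can be turned into a tiny executable check. A minimal sketch (Python for concreteness; the name `dependence_witness` is mine, not from the paper, and we must settle for a finite sample of $X$ rather than all of it):

```python
from itertools import combinations

def dependence_witness(f, sample):
    """Search a finite sample of X for a pair (x, y) with f(x) != f(y).

    Returning a pair realizes the Sigma in 'can be shown to depend on';
    returning None is only evidence of independence on this sample,
    not a proof of independence on all of X.
    """
    for x, y in combinations(sample, 2):
        if f(x) != f(y):
            return (x, y)
    return None

# Parity visibly depends on its input: a witness pair exists.
assert dependence_witness(lambda x: x % 2, range(4)) == (0, 1)
# A constant function yields no witness on this sample.
assert dependence_witness(lambda x: 42, range(4)) is None
```

Note the asymmetry, matching the text: a returned pair is positive evidence of dependence, while `None` merely fails to exhibit it.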
12 | 13 | Stuff to still write about 14 | - syntax and semantics 15 | - observations 16 | - allowed changes 17 | -------------------------------------------------------------------------------- /GenericProgrammingRevisited.md: -------------------------------------------------------------------------------- 1 | # Generic Programming Revisited 2 | 3 | There are multiple notions of generic programming: 4 | - polytypic, or datatype-generic programming 5 | - parametric polymorphism 6 | - "theory" polymorphism, or interface polymorphism, which are my names for the original Musser-Stepanov vision 7 | 8 | And a number of methods that really are quite like generic programming: 9 | - other kinds of polymorphism? 10 | - various kinds of "a la Carte" approaches 11 | 12 | At the end of the day, one can say that in various languages, "generic programming" is 13 | the label that's used for the kinds of polymorphism that the language doesn't have, or at least 14 | doesn't support very well. 15 | 16 | In related work, there is: 17 | - Jeremy Gibbons's [Datatype-Generic Programming](https://www.cs.ox.ac.uk/publications/publication1397-abstract.html) chapter, 18 | - Garcia et al.'s [An extended comparative study of language support for generic programming](https://www.cambridge.org/core/journals/journal-of-functional-programming/article/an-extended-comparative-study-of-language-support-for-generic-programming/C97D5964ECC2E651EEF9A70BC50600A6) 19 | - Siek and Lumsdaine's [A language for generic programming in the large](https://www.sciencedirect.com/science/article/pii/S0167642308001123) 20 | - Chetioui, Järvi, and Haveraaen's [Revisiting Language Support for Generic Programming: When Genericity Is a Core Design Goal](https://arxiv.org/abs/2211.01678) (and its list of references) 21 | 22 | ToDo: 23 | - give full, good examples of each kind 24 | - find more papers (i.e.
after the obvious survey papers) that really are about new kinds of polymorphism 25 | - redo the examples as polymorphism over an initial segment of a context/telescope. 26 | 27 | Polytypic is trickier because it involves a syntax/semantics loop. But once that's out of the way, 28 | the link to staging and generative programming becomes clear, so that the C++ approach to this 29 | does not seem ad hoc at all anymore. 30 | 31 | At the end of the day, it's all various kinds of "programming to an interface". Why it all looks so 32 | different, other than the wildly different syntaxes of different languages, is that the restrictions 33 | on the interfaces or, at the other extreme, the permissiveness of the language of interfaces, makes 34 | what one can effectively say quite different. 35 | -------------------------------------------------------------------------------- /InvisibleMath.md: -------------------------------------------------------------------------------- 1 | Proof assistants offer an interesting opportunity: revealing "invisible math", giving us a way to better understand what paper mathematics keeps implicit. This better understanding can then be used to create better features to make it implicit also in formalized versions, at least when there is a disciplined, reasoned way to do so. I consider the vast majority of current features that try to do automation to be reactionary hacks that attempt to mimic human mathematics without deeply understanding what's really going on. 2 | 3 | For example, it is possible to use LaTeX as a mere word processor. In this mode of usage, LaTeX is actually a major pain. But if you understand the features of LaTeX more deeply, you can start to modify how you write papers to take advantage of these features. Slowly, it becomes quite reasonable to use LaTeX. In many ways, this is what a lot of LaTeX packages represent: an encoding of human knowledge that helps improve the *process* of math-heavy document writing.
4 | 5 | One of the most obvious advances that has been facilitated by proof assistants is the realization amongst mathematicians that "equality" is not so simple, and that there's a lot more to it (isomorphism, notions of 'canonical') than the literature seemed to say. It's also amusing that, upon reflection, Grothendieck already knew this, and you can see it in his writing going back to the mid 1950s. Proof assistants being a royal pain about equality is a symptom that this is a difficult issue, not something to beat down and bury!!! 6 | 7 | Namespaces are another such symptom. Mathematics is huge and much too often uses similar notation for wildly different things. Humans struggle with this, but machines even more so. We should learn from this instead of trying to bury it. In particular, we should learn that being precise about the stuff that is "implicitly assumed to be visible" at any given point in a mathematical development is quite useful. That it is a fair amount of work to do so should inform us on the struggles of learners. Just because successful people are able to juggle all of this in their head doesn't mean that it should be a required skill of all future mathematicians! That becomes an exclusionary mechanism that may rule out superb problem solvers who struggle to keep large amounts of theory in their active brains. 8 | 9 | Even super simple things like "Let G be a Group" hide enormous complexity that we should be aware of: this 'speech act' in mathematics actually brings into scope 10 | - a bunch of names (a binary operation, a constant, a unary operation, the names of several proofs) 11 | - a bunch of theorems (you have a Group in scope, so its basic properties are implicitly assumed too) 12 | - a bunch of definitions and constructions. And so on. That's way too much for so few words. And wildly imprecise! 13 | 14 | No wonder our computers struggle like mad when we say "Let G be a Group and M a Monoid".
We've just brought into scope a bunch of conflicting things. The usual reply is "it's ok, it can all be resolved uniquely". But this is complete BS: it can only be resolved uniquely when you know a priori that the statement you've just made is correct. If you've made a mistake, then all hell breaks loose: it is no longer unique. So our language for doing mathematics is extremely unhelpful and fragile with respect to errors. 15 | 16 | I could go on and on. Proof assistants are wonderful in this respect: they are very revelatory of the 'badness' embedded in mathematical vernacular. 17 | 18 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | BSD 3-Clause License 2 | 3 | Copyright (c) 2022, Jacques Carette 4 | All rights reserved. 5 | 6 | Redistribution and use in source and binary forms, with or without 7 | modification, are permitted provided that the following conditions are met: 8 | 9 | 1. Redistributions of source code must retain the above copyright notice, this 10 | list of conditions and the following disclaimer. 11 | 12 | 2. Redistributions in binary form must reproduce the above copyright notice, 13 | this list of conditions and the following disclaimer in the documentation 14 | and/or other materials provided with the distribution. 15 | 16 | 3. Neither the name of the copyright holder nor the names of its 17 | contributors may be used to endorse or promote products derived from 18 | this software without specific prior written permission. 19 | 20 | THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" 21 | AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE 22 | IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE 23 | DISCLAIMED.
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE 24 | FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 25 | DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR 26 | SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER 27 | CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, 28 | OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE 29 | OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 30 | -------------------------------------------------------------------------------- /ModuleSystem.md: -------------------------------------------------------------------------------- 1 | # What is a Module (System)? 2 | 3 | This is a set of notes trying to puzzle out what a module system could be for Agda. 4 | 5 | ## Preliminary "What is" 6 | 7 | Before trying to define the full ideas, it is quite useful to first understand what 8 | the simpler cases might be. 9 | 10 | ### What is a Module? 11 | 12 | That is quite straightforward: 13 | 14 | > A module is an implementation of an interface. 15 | 16 | A slightly expanded definition would read 17 | 18 | > A module is an implementation in a language L that satisfies an interface written 19 | > in a language L'. 20 | 21 | There are 3 open parameters (on purpose) in that definition: 22 | 1. The language L (and its well-formedness rules, possibly typing rules, etc) 23 | 2. The language L' (ditto) 24 | 3. The meaning of 'satisfies'. 25 | 26 | In particular, L' could be "English", L could be "pseudo-code" and 'satisfies' could be 27 | entirely informal. At the opposite end, L and L' could be part of the same language, 28 | and 'satisfies' might be defined by the rules of that language. 29 | 30 | At this point, there is an obvious remark to make: the things called modules in 31 | Agda don't satisfy any interface!
So it is worthwhile to make a detour to 32 | describe something related to modules that will let us understand what is currently 33 | implemented. 34 | 35 | ### What is a Namespace? 36 | 37 | So we really weaken "satisfies an interface" to "exports a bunch of names" 38 | (and is well-formed). We didn't actually talk about names yet... so it's worth 39 | expanding that out first. 40 | 41 | #### Names? 42 | 43 | There is an implicit assumption that an 'implementation' is actually an association 44 | of a bunch of (potentially typed) terms to names. This means that a 'signature' must also 45 | match that. Unsurprisingly, that ends up looking very much like records and 46 | record signatures. 47 | 48 | Well, 'term' is not quite right, is it? That doesn't quite allow `data` and `record` 49 | declarations. What's really meant here is some kind of "definition". So rather than 50 | 'term', let's use the more general Definition for that syntactic class. 51 | 52 | #### Namespace 53 | 54 | So we have (using Haskell's `Data.These`): 55 | 56 | > A Namespace is an association of names to `These Type Definition`. 57 | 58 | Just to be clear: 59 | - `This Type` corresponds to a postulate 60 | - `That Definition` corresponds to a bare implementation 61 | - `These Type Definition` corresponds to a properly typed implementation 62 | 63 | (In Agda, using `(Type, Maybe Definition)` to disallow bare implementations is probably 64 | better?) 65 | 66 | #### Agda 67 | 68 | It should be clear that Agda's (unparametrized) `module` corresponds to a 69 | namespace. The following information in Agda's own documentation makes that 70 | very clear: 71 | 72 | > The main purpose of the module system is to structure the way names are used in a program. 73 | 74 | The use of names is of course crucial for one purpose: 75 | **being able to access them externally**.
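The three-way `These Type Definition` classification of namespace entries above can be mimicked outside Haskell too. A small stand-in sketch (Python, purely for illustration; the class and field names are mine):

```python
from dataclasses import dataclass

# The three shapes of Haskell's Data.These, specialized to namespace entries.
@dataclass
class This:            # type only: a postulate
    type_: str

@dataclass
class That:            # definition only: a bare implementation
    defn: str

@dataclass
class These:           # both: a properly typed implementation
    type_: str
    defn: str

# A Namespace is then just an association of names to such entries.
namespace = {
    "zero": These("Nat", "zero"),
    "succ": This("Nat -> Nat"),            # postulated: no body given
    "two":  That("succ (succ zero)"),      # bare: no type given
}

assert isinstance(namespace["succ"], This)   # a postulate
assert isinstance(namespace["two"], That)    # a bare implementation
```

The Agda-flavoured variant from the parenthetical remark, `(Type, Maybe Definition)`, would simply drop the `That` case.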
The biggest difference between 76 | modules (and namespaces) and lambda-terms (and contexts) is the ability 77 | to "zoom in" directly to one piece, without the need to use some 78 | unary codes. (Humans are not Borg.) 79 | 80 | ### What is a Module System? 81 | 82 | The question "what is a module" is not particularly interesting, even when augmented 83 | to deal with parametrized modules. This is because having a single module around 84 | doesn't really help; the real desire is to have many modules. So we need to 85 | worry about how modules interact, refer to each other, and so on. 86 | 87 | There doesn't seem to be as simple an answer here. 88 | 89 | ## Requirements Analysis 90 | 91 | What do we even want modules (namespaces) for? 92 | 93 | At the simplest level, we know that libraries grow to be quite large, as there simply 94 | are many different concepts to express. At the absolute minimum, there need to be 95 | tools to help organize lots of code. 96 | 97 | Luckily, code doesn't normally consist of random artifacts. It expresses various 98 | kinds of (organized) knowledge. Now, it should be clear that a purely hierarchical 99 | organization is **completely hopeless**. There are concepts that will always want 100 | to 'sit' in multiple parts of some purely tree-like hierarchy. Going to a directed 101 | acyclic graph helps a tiny bit, but doesn't solve the fundamental issue that there 102 | are non-trivial isomorphisms between bits of knowledge that seem to belong in 103 | different places, no matter how one slices things up. [Some references for this would 104 | be nice, later.] 105 | 106 | Nevertheless, Agda's own documentation says: 107 | 108 | > This is done by organising the program in an hierarchical structure of modules 109 | > where each module contains a number of definitions and submodules. 110 | 111 | This doesn't mean we have to abandon hierarchies!
It merely means that the 112 | hierarchy should not be taken too seriously, and that other means to 113 | relate information must be supported too. Whether to do this via allowing 114 | duplication and using post-facto isomorphisms, or going with generative 115 | morphisms, or both, is an open design question. 116 | 117 | Nevertheless, one question to look at is "how do we come up with 'good' 118 | modules?" 119 | 120 | ### Design for Change 121 | 122 | Parnas in his 1972 paper on information hiding (insert link here) provides 123 | a time-tested answer: if we know that certain things are likely to change 124 | in the future, then that aspect of the change should be **hidden** behind 125 | an abstraction barrier. One good process to follow to identify good 126 | modules is to divide the information contained in a module into that 127 | which ought to be secret (often implementation details, but 128 | there are other things one may hide as well) and that which can be 129 | exposed to the world, via an interface. 130 | 131 | It's important to remember that Parnas never insisted that 'hiding' be 132 | something that must be done *in* a language, never mind being enforced 133 | by the language itself. Human-convention interfaces are perfectly 134 | acceptable. Furthermore, and wildly misunderstood as well, there is no 135 | requirement whatsoever that modules persist *at run time*. That the compiler 136 | may inline everything is perfectly allowable. Interestingly, Parnas doesn't 137 | believe that modules necessarily need to exist in the code either - as long 138 | as they exist in the design and the mapping from the design to the implementation 139 | is clear, it's ok to write "messy" implementations. A more modern way to 140 | think of this is from the point of view of generative programming (whether 141 | C++ templates, MetaOCaml, Template Haskell, or various hacked up means 142 | doesn't really matter.)
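The Parnas process can be made concrete with a minimal sketch (Python, with invented names): the likely-to-change secret is the storage format, hidden behind a small interface; clients of `get`/`set` survive a change of format untouched.

```python
import json

class SettingsStore:
    """The interface: the only thing clients may depend on."""
    def get(self, key): raise NotImplementedError
    def set(self, key, value): raise NotImplementedError

class JsonStore(SettingsStore):
    """The secret: settings live in a JSON string. Replacing this class
    with, say, a database-backed one changes no client code."""
    def __init__(self):
        self._blob = "{}"
    def get(self, key):
        return json.loads(self._blob).get(key)
    def set(self, key, value):
        d = json.loads(self._blob)
        d[key] = value
        self._blob = json.dumps(d)

store = JsonStore()
store.set("theme", "dark")
assert store.get("theme") == "dark"
```

And, as the surrounding text notes, none of this need survive compilation: the barrier is a design-time artifact, which an implementation is free to inline away.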
143 | 144 | Eventually, it would be really good to support design for change. But that's 145 | farther into the future. 146 | 147 | ### Current Features 148 | 149 | Normally, it would be better to have user stories here, but for expediency, 150 | let's go with features needed/wanted. Proper requirements can be reverse 151 | engineered from them later. 152 | 153 | The features used right now are 154 | 1. accessing (i.e. Module.name) 155 | 2. instantiating 156 | 3. import 157 | 4. open 158 | 5. public 159 | 6. using / hiding 160 | 7. renaming 161 | 8. data and record modules built 'implicitly' 162 | 163 | It is worthwhile expanding what the features "mean": 164 | 1. A ModNm is a (key,value) database, with keys being names, and values being 165 | definitions. 166 | 2. A PMN is a special kind of "function" that is known to generate a ModNm. 167 | It can be partially applied. 168 | 3. 'import' is a weird feature that stands out as it is really an interface to the 169 | underlying file system, and is mainly used for its side-effect of making a (P)MN 170 | visible in the current scope. 171 | 4. open is a ModNm-to-scope operation that adds all the names in a ModNm as being 172 | directly visible in the current scope. It is side-effecting too in that sense. 173 | However, these are made visible in a 'private' manner, in the sense that they 174 | are not added to the external interface of the ModNm. 175 | 5. the public qualifier overrides the behaviour of open to make the names 176 | visible publicly. 177 | 6. using and hiding are two means to change the visibility of names, i.e. 178 | - using gives an explicit list of items now being visible, 179 | - hiding explicitly removes an explicit list of items from an implicit list of "all" names 180 | 7. renaming provides a way to locally change the names of definitions. 181 | 182 | ### Wanted Features 183 | 184 | 1. sharing 185 | 2. module interfaces (i.e. module types) 186 | 3.
module interface combinators 187 | 188 | ## Design 189 | 190 | (This is written in a rush, to get feedback, and should not be understood as 191 | being more than a placeholder. In particular, the proposal seems like a bit of 192 | a leap given the rest of the write-up.) 193 | 194 | ### Background 195 | 196 | **Important remark**: record types, telescopes, theories and contexts are, in dependent 197 | type theory, "the same thing". There are versions of all of these that can be 198 | understood as associating names to types (and more). Some even include conservative extensions, 199 | aka definitions, even though they are "at the type level". 200 | 201 | ### Proposal, Part 1 202 | 203 | Merge (at least) records (as values) and modules (as containers of implementations). 204 | Preserve a 'marker' that would recall the intent of use, so that some features can be 205 | enabled/disabled for each, if so desired. 206 | 207 | For example, it might make sense to not allow 'open' in a record declaration, 208 | as that would mean that creating record values might have side effects. But then 209 | again, one can essentially hack things up now by using anonymous modules (both 210 | surrounding a declaration and local ones). So maybe this isn't even a big deal. 211 | 212 | Pros: 213 | - significant simplification of code base 214 | - no need to 'generate' modules for records 215 | - Module types 216 | - First class modules 217 | - Generative records 218 | - Records can now have private and abstract fields 219 | - No need to define dummy records to get to use selector syntax 220 | 221 | Cons: 222 | - very large surgery 223 | - would allow dependent record fields 224 | 225 | ### Proposal, Part 2 226 | 227 | Introduce 'Let' in the Agda AST. Reason: preserve sharing. 228 | 229 | Change module instantiation to use 'Let' instead of using substitution. 230 | Furthermore, do not put independent lambdas on each 'name' in a module, 231 | but leave them be on the 'outside'.
As parametrized modules are 232 | already transparent, these parameters would continue to be so (i.e. in 233 | some ways behave as if they were all implicit, whether they actually are or not). 234 | 235 | ### Proposal, Part 3 236 | 237 | Introduce signature combinators. Different combinators would produce different 238 | results (some would give the data of a pullback, for example), so that some 239 | syntax would be needed to "extract" concrete signature objects. This is needed 240 | to enable various combinations and diagram-level combinators. 241 | 242 | ---- 243 | 244 | ## Misc Notes 245 | 246 | These will need to be moved up into the main text when the proper place 247 | appears. 248 | 249 | Below, ModNm is used for modules / namespaces, for brevity. ParModNm, and 250 | even PMN, for "parametrized modules / namespaces". 251 | 252 | ### Parametrized modules / namespaces 253 | 254 | - parameters are not ModNm themselves! They are values from a type. 255 | So here too Agda's modules are quite second-class. 256 | - PMN are basically functions which return a ModNm, i.e. generators. 257 | - However PMN are "transparent" in that the names that are eventually bound 258 | are visible even if the PMN is not applied. This is useful but weird. 259 | 260 | ### Likely Needed 261 | 262 | Any next-gen version of 'module' is most likely to need a *Let* addition to the 263 | internal AST. This is the sanest way to maintain sharing. 264 | 265 | ### Problems 266 | 267 | Syntax and pattern declarations have a weird status as definitions in a module.
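The sharing concern behind the *Let* suggestion can be made concrete with a toy sketch (Python; the term representation is invented for illustration): instantiating a parametrized module by substitution copies the argument into every use site, while a 'Let' node keeps a single shared copy.

```python
# Toy terms: strings are variables/atoms, tuples are applications.
BIG = ("mul", ("add", "x", "1"), ("add", "x", "1"))   # stands for a large argument

def subst(term, var, arg):
    """Instantiation by substitution: every occurrence duplicates arg."""
    if term == var:
        return arg
    if isinstance(term, tuple):
        return tuple(subst(t, var, arg) for t in term)
    return term

def size(term):
    return 1 + sum(size(t) for t in term) if isinstance(term, tuple) else 1

body = ("pair", "p", "p")            # a module body using its parameter p twice
inlined = subst(body, "p", BIG)      # substitution: BIG now appears twice
shared = ("let", "p", BIG, body)     # a Let node: BIG appears exactly once

assert size(inlined) > size(shared)  # the duplication is real, and compounds
```

With nested instantiations the substituted version grows multiplicatively, which is why an internal *Let* looks like the sanest way to maintain sharing.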
267 | -------------------------------------------------------------------------------- /PE-Revisited.md: -------------------------------------------------------------------------------- 1 | # Evaluation Modulo Theories or Partial Evaluation Revisited 2 | 3 | The 'simple' way to see PE is that it uses a single bit of information (static or dynamic) along with an operational semantics (defined by an interpreter) to define what PE does. Multiple applications then 'do magic'. 4 | 5 | The offline version of PE does a preliminary abstract interpretation that propagates this static/dynamic classification. Then a partial evaluator takes the marked program and reduces everything that it can. An online version computes static/dynamic as it goes, and is much more precise because of it. Still, this doesn't work so well. 6 | 7 | The first thing that is done is various program transformations that preserve contextual equivalence. PE is a program transformation, after all, so this isn't so far fetched. And the PE equations are all observational in nature too, i.e. viewing programs as input-output machines. Though, of course, for side-effecting languages, these must be preserved too. 8 | 9 | So the interpreter really is a convenience, especially when it is written in the same programming language as the programs we're evaluating: it gives an executable 'reference semantics'. But it can only capture what's visible through reductions. The validity of program transformations (like translation to CPS) must really be proven externally. 10 | 11 | The other thing that this doesn't handle so well is *partially static data*. For example, we know that terms over the integers extended with variables (i.e. dealing with open terms over (Z, 0, 1, add, mul)) are polynomials, which have a wealth of normal forms. Most beautifully, this can all be done via arithmetic rather than rewrites, resulting in significantly faster work. This generalizes - that's what FREX is all about.
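The polynomial claim is easy to make concrete. A minimal sketch (Python; the representation is mine, and a fuller version would also trim trailing zero coefficients): open terms in one variable over (Z, 0, 1, add, mul) normalize to coefficient lists, and the normal form is computed by arithmetic alone, with no rewriting.

```python
def add(p, q):
    """Add polynomials given as coefficient lists, lowest degree first."""
    n = max(len(p), len(q))
    p, q = p + [0] * (n - len(p)), q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def mul(p, q):
    """Multiply polynomials by convolving their coefficients."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

X, ONE = [0, 1], [1]   # the open term x, and the constant 1

# (x + 1) * (x - 1) normalizes to x^2 - 1, purely by arithmetic:
assert mul(add(X, ONE), add(X, [-1])) == [-1, 0, 1]
```

Two open terms denote the same partially static value exactly when their coefficient lists agree, which is what makes such data so much better-behaved than raw syntax under rewriting.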
12 | 13 | Other uses of PE go further: MaplePE changes the language (to say more, carry more invariants, etc.). The work on FFT in MetaOCaml adds some extra abstract interpretation to make things better. There's other work that uses eta-expansion. 14 | 15 | An obvious question is: does this all have something in common? The answer is yes: rather than viewing partial evaluation as getting rid of a layer of interpretation, we can also see it as a walk in the space of programs, modulo its equational theory. It is really important that the equational theory used be extended to one that deals with open terms (i.e. terms in context): most program fragments are not closed. We can then see things as reductions that look like 16 | 17 | context |- term => context' |- term' 18 | 19 | where the context is not limited to just variable declarations but can also contain relational information, i.e. anything valid from the equational theory. (That should probably be generalized further.) In a way, this is what moves us from PE to supercompilation! The point is that various parts of a program can *reduce*, or at least be rewritten to be "simpler", given the right properties hold of the current term in focus. 20 | 21 | The whole point is that this gives a uniform explanation of a lot of seemingly ad hoc features one encounters in the literature. 22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Notes 2 | 3 | An experiment in typing up my various notes (using Markdown) instead of writing things out by hand. Not quite as polished as a blog, but in the same vein. 4 | 5 | Each research idea is put in its own file. Right now, there are notes on: 6 | - [Dependence](Dependance.md) as a concept.
7 | - [Composite Levels](2LevelsCategories.md) as a way to say that we're pairing things up that belong in different universes, without committing to them living in the same universe. 8 | - [Generic Programming Revisited](GenericProgrammingRevisited.md) will tell the story of how various flavours of generic programming are related. 9 | - [Module System](ModuleSystem.md) explores what a module system is / ought to be, and then tries to make that more precise for Agda. For the curious: most likely what's needed is a parametrized namespace system, rather than a full blown module system. 10 | - [InvisibleMath](InvisibleMath.md) is Andrej Bauer's wonderful term for the mathematics that is implicit in "paper math" but that is often explicit in formalized mathematics. This is indeed an opportunity afforded by proof assistants that reveals something about the process of mathematics that should be leveraged some more. 11 | 12 | As a means to not forget stuff, I also jot down [research ideas](ResearchIdeas.md). These are not so much centered around wished-for results, but rather they are a collection of questions that, once answered, ought to yield interesting results. 13 | -------------------------------------------------------------------------------- /ResearchIdeas.md: -------------------------------------------------------------------------------- 1 | A place to keep various threads that I'd like to pursue, given time. Some are questions, some are topics. 2 | 3 | - Are terms or substitutions the 'primary' item in a type theory? The most interesting objects of study are derivations; that's not the question. The point is that there are interesting substitutions that are not term-like, a la polycategories. 4 | - Is there a good pedagogical way to explain the difference between "reason about" and "reason with" when it comes to languages and theories. There's a non-trivial 'level' difference that is frequently not talked about enough.
5 | - What's the relation between PE (partial evaluation), slicing and supercompilation, and NbE (normalization by evaluation). Some notes on [partial evaluation](PE-Revisited.md) are looking down that road a little bit. 6 | - NbE is related to biform theories and seems to be a particular kind of "meaning formula". Explore! 7 | - Even more related to biform theories and meaning formulas is 'Artin Gluing'. Looks like PE and NbE are also related. 8 | - Quantifier elimination is a particular instance of finding closed forms. There's probably a lot more "out there" that are also instances, but not recognized as such. 9 | - What is a type? (That's a bad question, a good question is "What is a type system?") I've got tons of notes on this, really need to write them up! 10 | - Need to remember that the duality between sums and products should yield sigma types for 'sum types' and record / Pi types (over a finite, discrete set of labels) for 'product types'. Syntactic sugar can then be added for some special cases. 11 | - pattern-matching, especially in the 'total' case, corresponds to first doing a **partition** of the type 'space', and then a continuation is applied to each case. This is related to optics (lenses are projections, where things are first made 'cartesian'.) 12 | - "marked contexts". In the case where a context is some fixed data-structure, it is useful to see it as (multi-) pointed, where the marks may indicate some phase transition, especially in the case when the context gets a natural order. Related to Bunched Logics. 13 | - -1-category theory sure seems related to various kinds of logic. 14 | - Conjecture: the category of -1-categories has 2 inhabitants iff inhabitation is decidable (i.e. likely equivalent to LEM) 15 | - Category Theory commutative diagram proofs should really be seen as 'movies'. First, the important data is not on the nodes or edges, but inside the faces. And even that is not enough to reconstruct a proof; an order needs to be given.
The act of 'focusing' that is implicit in 1-categorical proofs may also need to be made explicit, in general. 16 | - In the case of containers, what should be seen as a 'constant'? Obviously it is a container with no positions. That doesn't mean it is necessarily trivial! It could still have (up to) infinitely many shapes. 17 | - Typed polymorphic syntax. Such as "in the language of rings" and "in the internal language of category C". Related to programming to interfaces, obviously. 18 | - The species Cycle has no finitely axiomatizable (in reasonable settings) equational theory. So looking at Species as related to "Free" structures (such as Bag) is too restrictive a point of view. 19 | - Where does 'induction' come from? A lot of recursors (all for inductive types) come from the counit in the free-forgetful adjunction for an equational theory. But that does not yield induction. Is the right setting bicategorical? enriched? displayed? 20 | - Agda's function type $A \rightarrow B$ is really (at the meta-level) $\Sigma (f : A \rightarrow B)\ (f \text{ provably total in MLTT})$ where the function type on the right is all functions. Could rephrase in terms of provably-total functional relations to make less ambiguous. 21 | - Further explore the idea that a language (aka syntax) is the same thing as a coordinate system. (And remember that coordinate-free linear algebra can be much nicer for semantics, and horrible for actual computations.) 22 | - (already posted on Mastodon, but there might be more legs to it.) 23 | language (aka syntax) is a coordinate system and vice versa. 24 | In linear algebra, a coordinate system is a basis. Abstract linear algebra, i.e. basis-free, can be much nicer than working in a specific basis. Of course, to do concrete computations, a basis is required. 25 | In physics, coordinate systems can be crucial: doing orbital mechanics in Cartesian coordinates, instead of spherical, is completely crazy.
26 | But this argues for the need to have many (vastly different!) programming languages for different tasks. 27 | --------------------------------------------------------------------------------