From 9ef185c4eb00af9e9ebba145a69706e3716f456b Mon Sep 17 00:00:00 2001
From: emarsden <emarsden>
Date: Sun, 16 Feb 2003 17:44:51 +0000
Subject: [PATCH] Improvements to the Internals manual:

- added information on the linkage-table feature, which was written by
  Raymond Toy
- added a little information on the Info database
- more use of verbatim environments so that LaTeX formats things correctly
- removed obsolete information regarding source organisation at CMU
---
 docs/internals/architecture.tex      | 113 +-----------------------
 docs/internals/compiler-overview.tex |  24 ++---
 docs/internals/debugger.tex          |  13 +--
 docs/internals/design.tex            |  11 ++-
 docs/internals/environment.tex       |  20 +++++
 docs/internals/fasl.tex              |   4 +-
 docs/internals/front.tex             | 126 +++++++++++++++------------
 docs/internals/interface.tex         |   2 +-
 docs/internals/interpreter.tex       |  25 +++---
 docs/internals/lowlev.tex            |  75 ++++++++++++++++
 docs/internals/middle.tex            |  44 ++++++----
 docs/internals/object.tex            |   4 +-
 docs/internals/run-time.tex          |   3 +-
 docs/internals/vm.tex                |  12 +--
 14 files changed, 254 insertions(+), 222 deletions(-)

diff --git a/docs/internals/architecture.tex b/docs/internals/architecture.tex
index 747cc8057..18db1beec 100644
--- a/docs/internals/architecture.tex
+++ b/docs/internals/architecture.tex
@@ -2,117 +2,10 @@
 
 \chapter{Package and File Structure}
 
-\section{RCS and build areas}
-
-The CMU CL sources are maintained using RCS in a hierarchical directory
-structure which supports:
-\begin{itemize}
-\item shared RCS config file across a build area,
-
-\item frozen sources for multiple releases, and
-
-\item separate system build areas for different architectures.
-\end{itemize}
-
-Since this organization maintains multiple copies of the source, it is somewhat
-space intensive. But it is easy to delete and later restore a copy of the
-source using RCS snapshots.
-
-There are three major subtrees of the root \verb|/afs/cs/project/clisp|:
-\begin{description}
-\item[rcs] holds the RCS source (suffix \verb|,v|) files.
-
-\item[src] holds ``checked out'' (but not locked) versions of the source files,
-and is subdivided by release. Each release directory in the source tree has a
-symbolic link named ``{\tt RCS}'' which points to the RCS subdirectory of the
-corresponding directory in the ``{\tt rcs} tree. At top-level in a source tree
-is the ``{\tt RCSconfig}'' file for that area. All subdirectories also have a
-symbolic link to this RCSconfig file, allowing the configuration for an area to
-be easily changed.
-
-\item[build] compiled object files are placed in this tree, which is subdivided
-by machine type and version. The CMU CL search-list mechanism is used to allow
-the source files to be located in a different tree than the object files. C
-programs are compiled by using the \verb|tools/dupsrcs| command to make
-symbolic links to the corresponding source tree.
-\end{description}
-
-On order to modify an file in RCS, it must be checked out with a lock to
-produce a writable working file. Each programmer checks out files into a
-personal ``play area'' subtree of \verb|clisp/hackers|. These tree duplicate
-the structure of source trees, but are normally empty except for files actively
-being worked on.
-
-See \verb|/afs/cs/project/clisp/pmax_mach/alpha/tools/| for
-various tools we use for RCS hacking:
-\begin{description}
-\item[rcs.lisp] Hemlock (editor) commands for RCS file manipulation
-
-\item[rcsupdate.c] Program to check out all files in a tree that have been
-modified since last checkout.
- -\item[updates] Shell script to produce a single listing of all RCS log - entries in a tree since a date. - -\item[snapshot-update.lisp] Lisp program to generate a shell script which -generates a listing of updates since a particular RCS snapshot ({\tt RCSSNAP}) -file was created. -\end{description} - -You can easily operate on all RCS files in a subtree using: -\begin{verbatim} -find . -follow -name '*,v' -exec <some command> {} \; -\end{verbatim} - -\subsection{Configuration Management} - -config files are useful, especially in combinarion with ``{\tt snapshot}''. You -can shapshot any particular version, giving an RCSconfig that designates that -configuration. You can also use config files to specify the system as of a -particular date. For example: -\begin{verbatim} -<3-jan-91 -\end{verbatim} -in the the config file will cause the version as of that 3-jan-91 to be checked -out, instead of the latest version. - -\subsection{RCS Branches} - -Branches and named revisions are used together to allow multiple paths of -development to be supported. Each separate development has a branch, and each -branch has a name. This project uses branches in two somewhat different cases -of divergent development: -\begin{itemize} -\item For systems that we have imported from the outside, we generally assign a -``{\tt cmu}'' branch for our local modifications. When a new release comes -along, we check it in on the trunk, and then merge our branch back in. - -\item For the early development and debugging of major system changes, where -the development and debugging is expected to take long enough that we wouldn't -want the trunk to be in an inconsistent state for that long. -\end{itemize} - -\section{Releases} - -We name releases according to the normal alpha, beta, default convention. -Alpha releases are frequent, intended primarily for internal use, and are thus -not subject to as high high documentation and configuration management -standards. Alpha releases are designated by the date on which the system was -built; the alpha releases for different systems may not be in exact -correspondence, since they are built at different times. - -Beta and default releases are always based on a snapshot, ensuring that all -systems are based on the same sources. A release name is an integer and a -letter, like ``15d''. The integer is the name of the source tree which the -system was built from, and the letter represents the release from that tree: -``a'' is the first release, etc. Generally the numeric part increases when -there are major system changes, whereas changes in the letter represent -bug-fixes and minor enhancements. - \section{Source Tree Structure} -A source tree (and the master ``{\tt rcs}'' tree) has subdirectories for each -major subsystem: +The CMUCL source tree has subdirectories for each major subsystem: + \begin{description} \item[{\tt assembly/}] Holds the CMU CL source-file assembler, and has machine specific subdirectories holding assembly code for that architecture. @@ -130,7 +23,7 @@ subdirectory holds code that is shared across most backends. \item[{\tt lisp/}] The C runtime system code and low-level Lisp debugger. -\item[{\tt pcl/}] CMU version of the PCL implementation of CLOS. +\item[{\tt pcl/}] CMUCL version of the PCL implementation of CLOS. \item[{\tt tools/}] System building command files and source management tools. 
\end{description} diff --git a/docs/internals/compiler-overview.tex b/docs/internals/compiler-overview.tex index d7ad5fc22..d173f9427 100644 --- a/docs/internals/compiler-overview.tex +++ b/docs/internals/compiler-overview.tex @@ -396,8 +396,8 @@ We represent CATCH using the lexical exit mechanism. We do a transformation like this: \begin{verbatim} (catch 'foo xxx) ==> - (block \#:foo - (%catch \#'(lambda () (return-from \#:foo (%unknown-values))) 'foo) + (block #:foo + (%catch #'(lambda () (return-from #:foo (%unknown-values))) 'foo) (%within-cleanup :catch xxx)) \end{verbatim} @@ -417,19 +417,21 @@ code for it. We use a similar hack in Unwind-Protect to represent the fact that the cleanup forms can be invoked at arbitrarily random times. + \begin{verbatim} (unwind-protect p c) ==> - (flet ((\#:cleanup () c)) - (block \#:return + (flet ((#:cleanup () c)) + (block #:return (multiple-value-bind - (\#:next \#:start \#:count) - (block \#:unwind - (\%unwind-protect \#'(lambda (x) (return-from \#:unwind x))) - (\%within-cleanup :unwind-protect - (return-from \#:return p))) - (\#:cleanup) - (\%continue-unwind \#:next \#:start \#:count)))) + (#:next #:start #:count) + (block #:unwind + (%unwind-protect #'(lambda (x) (return-from #:unwind x))) + (%within-cleanup :unwind-protect + (return-from #:return p))) + (#:cleanup) + (%continue-unwind #:next #:start #:count)))) \end{verbatim} + We use the block \#:unwind to represent the entry to cleanup code in the case where we are non-locally unwound. Calling of the cleanup function in the drop-through case (or any local exit) is handled by cleanup generation. We diff --git a/docs/internals/debugger.tex b/docs/internals/debugger.tex index b39378ff0..e6a495e16 100644 --- a/docs/internals/debugger.tex +++ b/docs/internals/debugger.tex @@ -1,6 +1,5 @@ % -*- Dictionary: design; Package: C -*- -\#| \chapter{Debugger Information} \index{debugger information} \label{debug-info} @@ -35,7 +34,7 @@ would be used by the debugger, and also could be used by purify to delete parts of the debug-info even when the compiler dumps it in crunched form. [Note that this isn't terribly important if purify is smart about debug-info...] -|\# + Compiled source map representation: @@ -79,6 +78,7 @@ are for subforms of the specified form. Assume a byte vector with a standard variable-length integer format, something like this: + \begin{verbatim} 0..253 => the integer 254 => read next two bytes for integer @@ -87,6 +87,7 @@ like this: Then a compiled debug block is just a sequence of variable-length integers in a particular order, something like this: + \begin{verbatim} number of successors ...offsets of each successor in the function's blocks vector... @@ -96,6 +97,7 @@ particular order, something like this: first live mask (length in bytes determined by number of VARIABLES) ...more <PC, top-level form offset, form-number, live-set> tuples... \end{verbatim} + We determine the number of locations recorded in a block by finding the start of the next compiled debug block in the blocks vector. @@ -226,7 +228,7 @@ the error should look the same to the debugger (or at least similar). -;;;; Debugger interface: +\subsection{Debugger Interface} How does the debugger interface to the "evaluator" (where the evaluator means all of native code, byte-code and interpreted IR1)? 
It seems that it would be
@@ -288,7 +290,8 @@ Block:
 \end{verbatim}
 
-Variable maps:
+
+\subsection{Variable maps}
 
 There are about five things that the debugger might want to know about a
 variable:
@@ -426,7 +429,7 @@ same-name variables unique all by itself.
 
-
-
-Stack parsing:
+\subsection{Stack parsing}
 
 [\#\#\# Probably not worth trying to make the stack parseable from the bottom up.
 There are too many complications when we start having variable sized stuff on
diff --git a/docs/internals/design.tex b/docs/internals/design.tex
index bfce4b533..3c84be194 100644
--- a/docs/internals/design.tex
+++ b/docs/internals/design.tex
@@ -1,8 +1,10 @@
 %%\documentstyle[cmu-titlepage]{report} % -*- Dictionary: design -*-
 %\documentstyle{report} % -*- Dictionary: design -*-
+
 \documentclass{report}
 \usepackage{ifthen}
 \usepackage{calc}
+\usepackage{palatino}
 \usepackage[hyperindex=false,colorlinks=false,urlcolor=blue]{hyperref}
 
 % define a new conditional statement which allows us to include
@@ -18,13 +20,13 @@
 
 \title{Design of CMU Common Lisp}
 
-\date{January 3, 2000}
+\date{January 15, 2003}
 \author{Robert A. MacLachlan (ed)}
 
 \ifpdf
 \pdfinfo{
 /Author (Robert A. MacLachlan, ed)
-/Title (CMUCL User's Manual)
+/Title (Design of CMU Common Lisp)
 }
 % Add section numbers to the bookmarks, and open 2 levels by default.
 \hypersetup{bookmarksnumbered=true,
@@ -82,7 +84,10 @@
 \maketitle
 \abstract{This report documents internal details of the CMU Common Lisp
 compiler and run-time system. CMU Common Lisp is a public domain
-implementation of Common Lisp that runs on various Unix workstations.}
+implementation of Common Lisp that runs on various Unix workstations.
+This document is a work in progress: neither the contents nor the
+presentation is complete. Nevertheless, it provides some useful
+background information, in particular regarding the CMUCL compiler.}
 \tableofcontents
 \include{architecture}
diff --git a/docs/internals/environment.tex b/docs/internals/environment.tex
index e46f48f8e..43cd32b64 100644
--- a/docs/internals/environment.tex
+++ b/docs/internals/environment.tex
@@ -1,3 +1,23 @@
 \chapter{The Type System}
+
+
 \chapter{The Info Database}
+
+The info database provides a functional interface to global
+information about named things in CMUCL. Information is considered to
+be global if it must persist between invocations of the compiler. The
+use of a functional interface eliminates the need for the compiler to
+worry about the details of the representation. The info database also
+handles the need for multiple ``global'' environments, which makes it
+possible to change something in the compiler without trashing the
+running Lisp environment.
+
+The info database contains arbitrary Lisp values, addressed by a
+combination of name, class and type. The Name is an EQUAL-thing which
+is the name of the thing that we are recording information about.
+Class is the kind of object involved: typical classes are Function,
+Variable, Type. A type names a particular piece of information within
+a given class. Class and Type are symbols, but are compared with
+STRING=.
+
diff --git a/docs/internals/fasl.tex b/docs/internals/fasl.tex
index ef743cd43..2445c3ae1 100644
--- a/docs/internals/fasl.tex
+++ b/docs/internals/fasl.tex
@@ -4,7 +4,7 @@
 
 The purpose of Fasload files is to allow concise storage and rapid
 loading of Lisp data, particularly function definitions.
 The intent is that loading a Fasload file has the same effect as loading the
-ASCII file from which the Fasload file was compiled, but accomplishes
+source file from which the Fasload file was compiled, but accomplishes
 the tasks more efficiently. One noticeable difference, of course, is
 that function definitions may be in compiled form rather than
 S-expression form. Another is that Fasload files may specify in what
@@ -67,7 +67,7 @@ by one or more bytes of all ones \verb|#xFF|; this is called the
 of necessity begins with a byte other than \verb|#xFF|. The body is
 terminated by the operation {\tt FOP-END-GROUP}.
 
-The first nine characters of the header must be "{\tt FASL FILE}" in
+The first nine characters of the header must be \verb|FASL FILE| in
 upper-case letters. The rest may be any ASCII text, but by convention
 it is formatted in a certain way. The header is divided into lines,
 which are grouped into paragraphs. A paragraph begins
diff --git a/docs/internals/front.tex b/docs/internals/front.tex
index 458d65c68..1741c4159 100644
--- a/docs/internals/front.tex
+++ b/docs/internals/front.tex
@@ -1,7 +1,6 @@
 \chapter{ICR conversion} % -*- Dictionary: design -*-
 
-
 \section{Canonical forms}
 
 \#|
@@ -26,16 +25,18 @@ sees one operand is a FIXNUM, it transforms to EQ, but the generator for EQ
 isn't expecting numbers, so it doesn't use an immediate compare.
 
-Array hackery:
-
+\subsection{Array hackery}
 
-Array type tests are transformed to %array-typep, separation of the
+Array type tests are transformed to \verb|%array-typep|, separation of the
 implementation-dependent array-type handling. This way we can transform
 STRINGP to:
+
+\begin{verbatim}
 (or (simple-string-p x)
     (and (complex-array-p x)
          (= (array-rank x) 1)
-         (simple-string-p (%array-data x))))
+         (simple-string-p (%array-data x))))
+\end{verbatim}
 
 In addition to the similar bit-vector-p, we also handle vectorp and any type
 tests on which the a dimension isn't wild.
@@ -46,13 +47,14 @@ These changes combine to convert hairy type checks into hairy typep's, and
 then convert hairyp typeps into simple typeps.
 
-Do we really need non-VOP templates? It seems that we could get the desired
-effect through implementation-dependent ICR transforms. The main risk would be
-of obscuring the type semantics of the code. We could fairly easily retain all
-the type information present at the time the tranform is run, but if we
-discover new type information, then it won't be propagated unless the VM also
-supplies type inference methods for its internal frobs (precluding the use of
-%PRIMITIVE, since primitives don't have derive-type methods.)
+Do we really need non-VOP templates? It seems that we could get the
+desired effect through implementation-dependent ICR transforms. The
+main risk would be of obscuring the type semantics of the code. We
+could fairly easily retain all the type information present at the
+time the transform is run, but if we discover new type information,
+then it won't be propagated unless the VM also supplies type inference
+methods for its internal frobs (precluding the use of
+\verb|%PRIMITIVE|, since primitives don't have derive-type methods.)
 
 I guess one possibility would be to have the call still considered "known"
 even though it has been transformed. But this doesn't work, since we start doing
@@ -62,21 +64,26 @@ LET optimizations that trash the arglist once the call has been transformed
 
 Actually, I guess the overhead for providing type inference methods for the
 internal frobs isn't that great, since we can usually borrow the inference
 method for a Common Lisp function. For example, in our AREF case:
+
+\begin{verbatim}
 (aref x y) ==>
-    (let ((\#:len (array-dimension x 0)))
-      (%unchecked-aref x (%check-in-bounds y \#:len)))
+    (let ((#:len (array-dimension x 0)))
+      (%unchecked-aref x (%check-in-bounds y #:len)))
+\end{verbatim}
 
-Now in this case, if we made %UNCHECKED-AREF have the same derive-type method
-as AREF, then if we discovered something new about X's element type, we could
-derive a new type for the entire expression.
+Now in this case, if we made \verb|%UNCHECKED-AREF| have the same
+derive-type method as AREF, then if we discovered something new about
+X's element type, we could derive a new type for the entire
+expression.
 
-Actually, it seems that baring this detail at the ICR level is beneficial,
-since it admits the possibly of optimizing away the bounds check using type
-information. If we discover X's dimensions, then \#:LEN becomes a constant that
-can be substituted. Then %CHECK-IN-BOUNDS can notice that the bound is
-constant and check it against the type for Y. If Y is known to be in range,
-then we can optimize away the bounds check.
+Actually, it seems that baring this detail at the ICR level is
+beneficial, since it admits the possibility of optimizing away the
+bounds check using type information. If we discover X's dimensions,
+then \verb|#:LEN| becomes a constant that can be substituted. Then
+\verb|%CHECK-IN-BOUNDS| can notice that the bound is constant and
+check it against the type for Y. If Y is known to be in range, then we
+can optimize away the bounds check.
 
 Actually in this particular case, the best thing to do would be if we
 discovered the bound is constant, then replace the bounds check with an
@@ -124,6 +131,7 @@ Endp ==> (NULL (THE LIST ...))
 (typep x '<simple type>) => (<simple predicate> x)
 (typep x '<complex type>) => ...composition of simpler operations...
 \end{verbatim}
+
 TYPEP of AND, OR and NOT types turned into conditionals over multiple TYPEP
 calls. This makes hairy TYPEP calls more digestible to type constraint
 propagation, and also means that the TYPEP code generators don't have to deal
@@ -406,14 +414,18 @@ Of course, this transformation also simplifies the ICR even when it doesn't
 discover interesting type assertions, so it makes sense to do it whenever
 possible. This reduces the demands placed on register allocation, etc.
 
-|\#
 There are three dead-code flushing rules:
-    1] Refs with no DEST may be flushed.
-    2] Known calls with no dest that are flushable may be flushed. We null the
-       DEST in all the args.
-    3] If a lambda-var has no refs, then it may be deleted. The flushed argument
-       continuations have their DEST nulled.
+
+\begin{enumerate}
+\item Refs with no DEST may be flushed.
+
+\item Known calls with no dest that are flushable may be flushed. We null the
+DEST in all the args.
+
+\item If a lambda-var has no refs, then it may be deleted. The flushed
+  argument continuations have their DEST nulled.
+\end{enumerate}
 
 These optimizations all enable one another. We scan blocks backward, looking
 for nodes whose CONT has no DEST, then type-dispatching off of the node. If we
@@ -485,10 +497,13 @@ termination only happens 2\% of the time.
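+
+To illustrate how the three flushing rules above can cascade, consider
+a hypothetical source fragment such as:
+
+\begin{verbatim}
+(let ((x (cons a b)))
+  42)
+\end{verbatim}
+
+X has no refs, so rule 3 deletes the lambda-var and nulls the DEST of
+the argument continuation for (CONS A B). That leaves a flushable
+known call with no DEST, which rule 2 flushes, nulling the DEST of the
+refs to A and B; rule 1 then flushes those refs in turn.
+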
\section{Flow graph simplification} Things done: - Delete blocks with no predecessors. - Merge blocks that can be merged. - Convert local calls to Let calls. - Eliminate degenerate IFs. + +\begin{itemize} +\item Delete blocks with no predecessors. +\item Merge blocks that can be merged. +\item Convert local calls to Let calls. +\item Eliminate degenerate IFs. +\end{itemize} We take care not to merge blocks that are in different functions or have different cleanups. This guarantees that non-local exits are always at block @@ -504,6 +519,7 @@ between predecessors. IFs with identical branches would eventually be left with nothing in their branches.] We eliminate IF-IF constructs: + \begin{verbatim} (IF (IF A B C) D E) ==> (IF A (IF B D E) (IF C D E)) @@ -566,7 +582,7 @@ We use type info from the function continuation to find result types for functions that don't have a derive-type method. -ICR transformation: +\subsection{ICR transformation} ICR transformation does "source to source" transformations on known global functions, taking advantage of semantic information such as argument types and @@ -614,17 +630,15 @@ In the backward pass, we scan each block in reverse order, and eliminate any effectless nodes with unused values. In ICR this is the only way that code is deleted other than the elimination of unreachable blocks. - -\chapter{Type checking} -[\#\#\# Somehow split this section up into three parts: - -- Conceptual: how we know a check is necessary, and who is responsible for - doing checks. - -- Incremental: intersection of derived and asserted types, checking for - non-subtype relationship. - -- Check generation phase. -] +\chapter{Type checking} +% Somehow split this section up into three parts: +% -- Conceptual: how we know a check is necessary, and who is responsible for +% doing checks. +% -- Incremental: intersection of derived and asserted types, checking for +% non-subtype relationship. +% -- Check generation phase. We need to do a pretty good job of guessing when a type check will ultimately need to be done. Generic arithmetic, for example: In the absence of @@ -654,6 +668,7 @@ If after ICR phases, we have a continuation with check-type set in a context where it seems likely a check will be emitted, and the type is too hairy to be easily checked (i.e. no CHECK-xxx VOP), then we do a transformation on the ICR equivalent to: + \begin{verbatim} (... (the hair <foo>) ...) ==> @@ -725,7 +740,6 @@ arguments. These arguments will be marked as needing to be checked. \chapter{Constraint propagation} -\#| New lambda-var-slot: constraints: a list of all the constraints on this var for either X or Y. @@ -798,15 +812,16 @@ isn't really that great, and the cost should be small compared to that of the flow analysis that we are preparing to do. [Or we could punt on set variables...] -A type constraint is a structure that includes sset-element and has the type -and variable. -[\#\#\# Also a not-p flag indicating whether the sense is negated.] - Each variable has a list of its type constraints. We create a -type constraint when we see a type test or check. If there is already a -constraint for the same variable and type, then we just re-use it. If there is -already a weaker constraint, then we generate both the weak constraints and the -strong constraint so that the weak constraints won't be lost even if the strong -one is unavailable. +A type constraint is a structure that includes sset-element and has +the type and variable. [Also a not-p flag indicating whether the sense +is negated.] 
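+
+As a sketch, such a constraint might be represented along these lines
+(the slot layout here is illustrative, not the actual definition in
+the sources):
+
+\begin{verbatim}
+(defstruct (type-constraint (:include sset-element))
+  variable   ; the lambda-var being constrained
+  type       ; the asserted type, e.g. INTEGER
+  not-p)     ; true when the sense of the constraint is negated
+\end{verbatim}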
+
+Each variable has a list of its type constraints. We create a type
+constraint when we see a type test or check. If there is already a
+constraint for the same variable and type, then we just re-use it. If
+there is already a weaker constraint, then we generate both the weak
+constraints and the strong constraint so that the weak constraints
+won't be lost even if the strong one is unavailable.
 
 We find all the distinct type constraints for each variable during the
 pre-pass over the lambda nesting. Each constraint has a list of the weaker
 constraints
@@ -863,9 +878,10 @@ We check each newly defined global function for compatibility with
 previously recorded type information. If there is no :defined or :declared
 type, then we check for compatibility with any approximate function type
 inferred from previous uses.
-
+
+
+
 \chapter{Environment analysis}
-\#|
 
 A related change would be to annotate ICR with information about
 tail-recursion relations. What we would do is add a slot to the node
 structure that points to
diff --git a/docs/internals/interface.tex b/docs/internals/interface.tex
index 8a036452f..2a192d89d 100644
--- a/docs/internals/interface.tex
+++ b/docs/internals/interface.tex
@@ -1,4 +1,4 @@
-\chapter{User Interface}
+\chapter{User Interface of the Compiler}
 
 \section{Error Message Utilities}
 
diff --git a/docs/internals/interpreter.tex b/docs/internals/interpreter.tex
index e556f7b6b..1df18fad3 100644
--- a/docs/internals/interpreter.tex
+++ b/docs/internals/interpreter.tex
@@ -1,5 +1,7 @@
 % -*- Dictionary: design; Package: C -*-
 
+\chapter{The IR1 Interpreter}
+
 May be worth having a byte-code representation for interpreted code. This
 way, an entire system could be compiled into byte-code for debugging (the
 "check-out" compiler?).
@@ -8,8 +10,6 @@
 Given our current inclination for using a stack machine to interpret IR1, it
 would be straightforward to layer a byte-code interpreter on top of this.
 
-Interpreter:
-
 Instead of having no interpreter, or a more-or-less conventional interpreter,
 or byte-code interpreter, how about directly executing IR1?
 
@@ -32,11 +32,13 @@ single cell.
 
 The compiler can have some special frobs for making the interpreter efficient,
 such as a call operation that extracts arguments from the stack slots
 designated by a continuation list. Perhaps
+
 \begin{verbatim}
 (values-mapcar fun . lists)
 <==>
 (values-list (mapcar fun . lists))
 \end{verbatim}
+
 This would be used with MV-CALL.
 
@@ -180,14 +182,15 @@ but it would be much easier to use.
 
 [It would be impossible for an evalhook stepper to do this.]
 
-%PRIMITIVE usage:
+\section{Use of \%PRIMITIVE}
 
-Note: %PRIMITIVE can only be used in compiled code. It is a trapdoor into the
-compiler, not a general syntax for accessing "sub-primitives". It's main use
-is in implementation-dependent compiler transforms. It saves us the effort of
-defining a "phony function" (that is not really defined), and also allows
-direct communication with the code generator through codegen-info arguments.
+Note: \verb|%PRIMITIVE| can only be used in compiled code. It is a
+trapdoor into the compiler, not a general syntax for accessing
+"sub-primitives". Its main use is in implementation-dependent
+compiler transforms. It saves us the effort of defining a "phony
+function" (that is not really defined), and also allows direct
+communication with the code generator through codegen-info arguments.
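+
+For example, an interpreter stub for an operation that is open-coded
+by the compiler might look like this (MAKE-VALUE-CELL is used here
+purely as a plausible illustration):
+
+\begin{verbatim}
+;;; Stub so that non-compiled callers can reach an operation that
+;;; only the compiler really implements.
+(defun make-value-cell (value)
+  (%primitive make-value-cell value))
+\end{verbatim}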
 
-Some primitives may be exported from the VM so that %PRIMITIVE can be used to
-make it explicit that an escape routine or interpreter stub is assuming an
-operation is implemented by the compiler.
+Some primitives may be exported from the VM so that \verb|%PRIMITIVE|
+can be used to make it explicit that an escape routine or interpreter
+stub is assuming an operation is implemented by the compiler.
diff --git a/docs/internals/lowlev.tex b/docs/internals/lowlev.tex
index 7e6f13f35..68544fdca 100644
--- a/docs/internals/lowlev.tex
+++ b/docs/internals/lowlev.tex
@@ -1,10 +1,85 @@
 \chapter{Memory Management}
+
 \section{Stacks and Globals}
+
 \section{Heap Layout}
+
 \section{Garbage Collection}
 
 
 \chapter{Interface to C and Assembler}
+
+\section{Linkage Table}
+
+The linkage table feature is based on how dynamic libraries dispatch.
+A table of functions is used which is filled in with the appropriate
+code to jump to the correct address.
+
+For CMUCL, this table is stored at
+\verb|target-foreign-linkage-space-start|. Each entry is
+\verb|target-foreign-linkage-entry-size| bytes long.
+
+At startup, the table is initialized with default values in
+\verb|os_foreign_linkage_init|. On x86 platforms, the first entry is
+code to call the routine \verb|resolve_linkage_tramp|. All other
+entries jump to the first entry. The function
+\verb|resolve_linkage_tramp| looks at where it was called from to
+figure out which entry in the table was used. It calls
+\verb|lazy_resolve_linkage| with the address of the linkage entry.
+This routine then fills in the appropriate linkage entry with code to
+jump to where the real routine is located, and returns the address of
+the entry. On return, \verb|resolve_linkage_tramp| then just jumps to
+the returned address to call the desired function. On all subsequent
+calls, the entry no longer points to \verb|resolve_linkage_tramp| but
+to the real function.
+
+This describes how function calls are made. For foreign data,
+\verb|lazy_resolve_linkage| stuffs the address of the actual foreign
+data into the linkage table. The Lisp code then just loads the address
+from there to get the actual address of the foreign data.
+
+For SPARC, the linkage table is slightly different. The first entry is
+the entry for \verb|call_into_c|, so we never have to look this up. All
+other entries are for \verb|resolve_linkage_tramp|. This has the
+advantage that \verb|resolve_linkage_tramp| can be much simpler, since
+all calls to foreign code go through \verb|call_into_c| anyway, and
+that means all live Lisp registers have already been saved. Also, to
+make life simpler, we lie about \verb|closure_tramp| and
+\verb|undefined_tramp| in the Lisp code. These are really functions,
+but we treat them as foreign data, since these two routines are only
+used as addresses in the Lisp code to stuff into a Lisp function
+header.
+
+On the Lisp side, there are two supporting data structures for the
+linkage table: \verb|*linkage-table-data*| and
+\verb|*foreign-linkage-symbols*|. The latter is a hash table whose key
+is the foreign symbol (a string) and whose value is an index into
+\verb|*linkage-table-data*|.
+
+\verb|*linkage-table-data*| is a vector with an unlispy layout. Each
+entry has 3 parts:
+
+\begin{itemize}
+\item symbol name
+\item type, a fixnum, 1 = code, 2 = data
+\item library list - the library list at the time the symbol is registered.
+\end{itemize}
+
+Whenever a new foreign symbol is defined, a new
+\verb|*linkage-table-data*| entry is created.
+\verb|*foreign-linkage-symbols*| is updated with the symbol and the
+entry number into \verb|*linkage-table-data*|.
+
+The \verb|*linkage-table-data*| is accessed from C (hence the unlispy
+layout) to figure out the symbol name and the type so that the
+address of the symbol can be determined. The type tells the C code
+how to fill in the entry in the linkage-table itself.
+
+% (Should say something about genesis too, but I don't know how that
+% works other than the initial table is set up with the appropriate
+% first entry.)
+
+
 \chapter{Low-level debugging}
 
 \chapter{Core File Format}
diff --git a/docs/internals/middle.tex b/docs/internals/middle.tex
index 29addbdb6..857ef0216 100644
--- a/docs/internals/middle.tex
+++ b/docs/internals/middle.tex
@@ -6,15 +6,19 @@
 
 \chapter{Global TN assignment}
 
-[\#\#\# Rename this phase so as not to be confused with the local/global TN
-representation.]
+% Rename this phase so as not to be confused with the local/global TN
+% representation.
 
 The basic mechanism for closing over values is to pass the values as additional
 implicit arguments in the function call. This technique is only applicable
 when:
--- the calling function knows which values the called function wants to close
+
+\begin{itemize}
+\item the calling function knows which values the called function wants to close
 over, and
--- the values to be closed over are available in the calling environment.
+\item the values to be closed over are available in the calling
+  environment.
+\end{itemize}
 
 The first condition is always true of local function calls. Environment
 analysis can guarantee that the second condition holds by closing over any
@@ -247,10 +251,15 @@ totally linearize the code here, allowing code generation to scan the blocks
 in the emit order.
 
 There are basically two aspects to this optimization:
- 1] Dynamically reducing the number of branches taken v.s. branches not
-    taken under the assumption that branches not taken are cheaper.
- 2] Statically minimizing the number of unconditional branches, saving space
-    and presumably time.
+
+\begin{enumerate}
+\item
+Dynamically reducing the number of branches taken vs. branches not
+taken under the assumption that branches not taken are cheaper.
+\item
+Statically minimizing the number of unconditional branches, saving
+space and presumably time.
+\end{enumerate}
 
 These two goals can conflict, but if they do it seems pretty clear that the
 dynamic optimization should get preference. The main dynamic optimization is
@@ -325,7 +334,11 @@ psetq, etc., since it would fail when one of the new values is random code
 
 Is this really a general problem with eager type checking? It seems you could
 argue that there was no type error in this code:
-    (+ :foo (throw 'up nil))
+
+\begin{verbatim}
+    (+ :foo (throw 'up nil))
+\end{verbatim}
+
 But we would signal an error.
 
@@ -339,7 +352,7 @@ At continuation use time, we may in general have to do both a coerce-to-t and
 a type check, allocating two temporary TNs to hold the intermediate results.
 
-VMR Control representation:
+\section{VMR Control representation}
 
 We represent all control transfer explicitly. In particular, :Conditional VOPs
 take a single Target continuation and a Not-P flag indicating whether the sense
@@ -641,11 +654,12 @@ since THROW will use truly unknown values.]
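+
+As a hypothetical illustration of why THROW forces truly unknown
+values, the receiver below cannot know the value count at compile
+time:
+
+\begin{verbatim}
+(catch 'tag
+  (if p
+      (throw 'tag (values 1 2))
+      (throw 'tag 3)))
+\end{verbatim}
+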
On entry to a dynamic extent that has non-local-exists into it (always at an ENTRY node), we take a complete snapshot of the dynamic state: -\begin{verbatim} - the top pointers for all stacks - current Catch and Unwind-Protect - current special binding (binding stack pointer in shallow binding) -\end{verbatim} + +\begin{itemize} +\item the top pointers for all stacks +\item current Catch and Unwind-Protect +\item current special binding (binding stack pointer in shallow binding) +\end{itemize} We insert code at the re-entry point which restores the saved dynamic state. All TNs live at an NLX EP are forced onto the stack, so we don't have to restore diff --git a/docs/internals/object.tex b/docs/internals/object.tex index 043cabd18..0ded50815 100644 --- a/docs/internals/object.tex +++ b/docs/internals/object.tex @@ -680,10 +680,10 @@ return PC and the constant pool. \section{Memory Layout} -CMU Common Lisp has four spaces, read-only, static, dynamic-0, and dynamic-1. +\cmucl{} has four spaces, read-only, static, dynamic-0, and dynamic-1. Read-only contains objects that the system never modifies, moves, or reclaims. Static space contains some global objects necessary for the system's runtime or -performance (since they are located at a known offset at a know address), and +performance (since they are located at a known offset at a known address), and the system never moves or reclaims these. However, GC does need to scan static space for references to moved objects. Dynamic-0 and dynamic-1 are the two heap areas for stop-and-copy GC algorithms. diff --git a/docs/internals/run-time.tex b/docs/internals/run-time.tex index 72250c5ec..4499cbf26 100644 --- a/docs/internals/run-time.tex +++ b/docs/internals/run-time.tex @@ -1,4 +1,5 @@ -\part{Run-Time system} +\part{Run-Time System} + \input{environment} \input{interpreter} \input{debugger} diff --git a/docs/internals/vm.tex b/docs/internals/vm.tex index e36dcffe0..ffaf55c52 100644 --- a/docs/internals/vm.tex +++ b/docs/internals/vm.tex @@ -1,11 +1,11 @@ \chapter{Introduction} % -*- Dictionary: design -*- -(defun gvp (f) - (with-open-file (s f :direction :output :if-exists :supersede) - (maphash \#'(lambda (k v) - (declare (ignore v)) - (format s "~A~%" k)) - (c::backend-template-names c::*backend*)))) +% (defun gvp (f) +% (with-open-file (s f :direction :output :if-exists :supersede) +% (maphash \#'(lambda (k v) +% (declare (ignore v)) +% (format s "~A~%" k)) +% (c::backend-template-names c::*backend*)))) \section{Scope and Purpose} -- GitLab