From aa4d784c0bcf32183d15c56e5d00f2fc69c86856 Mon Sep 17 00:00:00 2001
From: cracauer <cracauer>
Date: Tue, 14 Jan 2003 21:02:07 +0000
Subject: [PATCH] Typo fixes by Ed Wang (thanks, Ed!).

---
 docs/internals/architecture.tex      |  8 +++----
 docs/internals/back.tex              | 21 +++++++++--------
 docs/internals/compiler-overview.tex |  8 +++----
 docs/internals/debugger.tex          | 34 ++++++++++++++++++----------
 docs/internals/front.tex             | 25 ++++++++++++--------
 docs/internals/glossary.tex          |  8 +++----
 docs/internals/interpreter.tex       |  2 +-
 docs/internals/middle.tex            | 30 +++++++++++++-----------
 docs/internals/object.tex            |  6 ++---
 docs/internals/retargeting.tex       | 30 ++++++++++++------------
 10 files changed, 99 insertions(+), 73 deletions(-)

diff --git a/docs/internals/architecture.tex b/docs/internals/architecture.tex
index 40b3c4057..747cc8057 100644
--- a/docs/internals/architecture.tex
+++ b/docs/internals/architecture.tex
@@ -229,8 +229,8 @@ have to edit them before they will work.}
 \section{Compiling the C Startup Code}
 
 There is a circular dependancy between lisp/internals.h and lisp/lisp.map that
-causes bootstrapping problems.  To the easiest way to get around this problem
-is to make a fake lisp.nm file that has nothing in it by a version number:
+causes bootstrapping problems.  The easiest way to get around this problem
+is to make a fake lisp.nm file that has nothing in it but a version number:
 
 \begin{verbatim}
 	% echo "Map file for lisp version 0" > lisp.nm
@@ -248,7 +248,7 @@ compile lisp producing a correct lisp.map:
 \begin{verbatim}
 	% make
 \end{verbatim}
-and the use \verb|tools/do-worldbuild| and \verb|tools/mk-lisp| to build
+and then use \verb|tools/do-worldbuild| and \verb|tools/mk-lisp| to build
 \verb|kernel.core| and \verb|lisp.core| (see section \ref[building-cores].)
 
 \section{Compiling the Lisp Code}
@@ -263,7 +263,7 @@ subsystem.  Error output is generated in files with ``{\tt .log}'' extension in
 the root of the build area.
 
 \item[setup.lisp] Some lisp utilities used for compiling changed files in batch
-mode and collecting the error output Sort of a crude defsystem.  Loads into the
+mode and collecting the error output.  Sort of a crude defsystem.  Loads into the
 ``user'' package.  See {\tt with-compiler-log-file} and {\tt comf}.
 
 \item[{\it foo}com.lisp] Each system has a ``\verb|.lisp|'' file in
diff --git a/docs/internals/back.tex b/docs/internals/back.tex
index 9d11f8dc3..c8cfe27f5 100644
--- a/docs/internals/back.tex
+++ b/docs/internals/back.tex
@@ -40,7 +40,7 @@ local TN assignment pass before this, since we allocate TNs afterward, so we do
 a pre-pass that marks the TNs that are local for our purposes.  We don't care
 if block splitting eventually causes some of them to be considered global.
 
-Note also that we are really only are interested in knowing if there is a
+Note also that we are really only interested in knowing if there is a
 unique reaching definition, which we can mash into our flow analysis rules by
 doing an intersection.  Then a definition only appears in the set when it is
 unique.  We then propagate only definitions of TNs with only one write, which
@@ -118,7 +118,7 @@ inter-block lifetime information.  The pre-pass creates all the
 global-conflicts for blocks that global TNs are referenced in.  The flow
 analysis pass just adds always-live global-conflicts for the other blocks the
 TNs are live in.  In addition to possibly being more efficient than SSets, this
-would directly result in the desired global-conflicts information, rather that
+would directly result in the desired global-conflicts information, rather than
 having to create it from another representation.
 
 The DFO sorted per-TN global-conflicts thread suggests some kind of algorithm
@@ -320,7 +320,7 @@ somebody can copy conflict info from the saved TN.
 
 Note that having block granularity in the conflict information doesn't mean
 that a localized packing scheme would have to do all moves at block boundaries
-(which would clash with the desire the have saving done as part of this
+(which would clash with the desire to have saving done as part of this
 mechanism.)  All that it means is that if we want to do a move within the
 block, we would need to allocate both locations throughout that block (or
 something).
@@ -361,10 +361,10 @@ We assume all locations can be used when an sc is based on an unbounded sb.
 ]
 
 
-TN-Refs are be convenient structures to build the target graph out of.  If we
+TN-Refs are convenient structures to build the target graph out of.  If we
 allocated space in every TN-Ref, then there would certainly be enough to
 represent arbitrary target graphs.  Would it be enough to allocate a single
-Target slot?  If there is a target path though a given VOP, then the Target of
+Target slot?  If there is a target path through a given VOP, then the Target of
 the write ref would be the read, and vice-versa.  To find all the TNs that
 target us, we look at the TN for the target of all our write refs.
 
@@ -486,8 +486,7 @@ implemented at this point.
 
 This means that Pack can pack all TNs simultaneously, using one data structure
 to represent the conflicts for each location.  So we have only one conflict set
-per SB location, rather than separating this information by environment
-environment.
+per SB location, rather than separating this information by environment.
 
 
 Load TN packing:
@@ -505,7 +504,7 @@ In many cases we will be able to pack the load TN with no hassle, but in
 general we may need to spill a TN that has already been packed.  We choose a
 TN that isn't in use by the offending VOP, and then spill that TN onto the
 stack for the duration of that VOP.  If the VOP is a conditional, then we must
-insert a new block interposed before the branch target so that the value TN
+insert a new block interposed before the branch target so that the TN
 value is restored regardless of which branch is taken.
 
 Instead of remembering lifetime information from conflict analysis, we rederive
@@ -572,6 +571,7 @@ far as the assembler is concerned, an instruction is a bit sequence that is
 broken down into subsequences.  Some of the subsequences are constant in value,
 while others can be determined at assemble or load time.
 
+\begin{verbatim}
 Assemble Node Form*
     Allow instructions to be emitted during the evaluation of the Forms by
     defining Inst as a local macro.  This macro caches various global
@@ -591,6 +591,7 @@ Gen-Label
 Emit-Label (Label)
     Gen-Label returns a Label object, which describes a place in the code.
     Emit-Label marks the current position as being the location of Label.
+\end{verbatim}
 
 
 
@@ -616,6 +617,7 @@ dumper:
 Fasl dumper and in-core loader are implementation (but not instruction set)
 dependent, so we want to give them a clear interface.
 
+\begin{verbatim}
 open-fasl-file name => fasl-file
     Returns a "fasl-file" object representing all state needed by the dumper.
     We objectify the state, since the fasdumper should be reentrant.  (but
@@ -636,6 +638,7 @@ load-component component code-vector length fixups
     Like Fasl-Dump-Component, but directly installs the code in core, running
     any top-level code immediately.  (???) but we need some way to glue
     together the componenents, since we don't have a fasl table.
+\end{verbatim}
 
 
 
@@ -647,7 +650,7 @@ the table.
 
 We have to grovel the constants for each component after compiling that
 component so that we can fix up load-time constants.  Load-time constants are
-values needed my the code that are computed after code generation/assembly
+values needed by the code that are computed after code generation/assembly
 time.  Since the code is fixed at this point, load-time constants are always
 represented as non-immediate constants in the constant pool.  A load-time
 constant is distinguished by being a cons (Kind . What), instead of a Constant
diff --git a/docs/internals/compiler-overview.tex b/docs/internals/compiler-overview.tex
index 036f08022..d7ad5fc22 100644
--- a/docs/internals/compiler-overview.tex
+++ b/docs/internals/compiler-overview.tex
@@ -133,7 +133,7 @@ with delayed branch instructions, locate instructions that can be moved into
 delay slots.  Files: {\tt assem-opt}
 
 \item[Assembly]
-Resolve branches and convert in to object code and fixup information.
+Resolve branches and convert into object code and fixup information.
 Files: {\tt assembler}
 
 \item[Dumping] Convert the compiled code into an object file or in-core
@@ -310,7 +310,7 @@ life simpler anyway, since this breaks the potential circularity of the
 Tail-Info-Type will affecting the Continuation-Derived-Type, which affects...
 
 When a given return has no non-call uses, we represent this by using
-*empty-type*.  This consistent with the interpretation that a return type of
+*empty-type*.  This is consistent with the interpretation that a return type of
 NIL means the function can't return.
 
 
@@ -384,7 +384,7 @@ The Entry node marks the beginning of a block or tagbody:
       (continuations nil :type list)) 
 \end{verbatim}
 It contains a list of all the continuations that the body could exit to.  The
-Entry node is used as a marker for the the place to snapshot state, including
+Entry node is used as a marker for the place to snapshot state, including
 the control stack pointer.  Each lambda has a list of its Entries so
 that environment analysis can figure out which continuations are really being
 closed over.  There is no reason for optimization to delete Entry nodes,
@@ -402,7 +402,7 @@ like this:
        xxx))
 \end{verbatim}
 
-\%CATCH just sets up the catch frame which points to the exit function.  %Catch
+\%CATCH just sets up the catch frame which points to the exit function.  \%Catch
 is an ordinary function as far as ICR is concerned.  The fact that the catcher
 needs to be cleaned up is expressed by the Cleanup slots in the continuations
 in the body.  \%UNKNOWN-VALUES is a dummy function call which represents the
diff --git a/docs/internals/debugger.tex b/docs/internals/debugger.tex
index 482baf71f..b39378ff0 100644
--- a/docs/internals/debugger.tex
+++ b/docs/internals/debugger.tex
@@ -72,16 +72,18 @@ determine whether an arbitrary form is a subform of some other form, since the
 form number of B will be \verb+>+ than A's number and \verb+<+ A's next sibling's number iff
 B is a subform of A.  
 
-This should be quite useful for doing the source=>pc mapping in the debugger,
+This should be quite useful for doing the \verb|source=>pc| mapping in the debugger,
 since that problem reduces to finding the subset of the known locations that
 are for subforms of the specified form.
 
 
 Assume a byte vector with a standard variable-length integer format, something
 like this:
+\begin{verbatim}
     0..253 => the integer
     254 => read next two bytes for integer
     255 => read next four bytes for integer
+\end{verbatim}
 
 Then a compiled debug block is just a sequence of variable-length integers in a
 particular order, something like this:
@@ -94,7 +96,7 @@ particular order, something like this:
     first live mask (length in bytes determined by number of VARIABLES)
     ...more <PC, top-level form offset, form-number, live-set> tuples...
 \end{verbatim}
-We determine the number of locations recorded in a block by the finding the
+We determine the number of locations recorded in a block by finding the
 start of the next compiled debug block in the blocks vector.
 
 [\#\#\# Actually, only need 2 bits for number of successors {0,1,2}.  We might
@@ -112,15 +114,18 @@ being interpreted as an index into the Location's alternate locations.]
 It looks like using structures for the compiled-location-info is too bulky.
 Instead we need some packed binary representation.
 
-First, let's represent a SC/offset pair with an "SC-Offset", which is an
+First, let's represent an SC/offset pair with an "SC-Offset", which is an
 integer with the SC in the low 5 bits and the offset in the remaining bits:
+\begin{verbatim}
     ----------------------------------------------------
     | Offset (as many bits as necessary) | SC (5 bits) |
     ----------------------------------------------------
+\end{verbatim}
 Probably the result should be constrained to fit in a fixnum, since it will be
 more efficient and gives more than enough possible offsets.
 
-We can the represent a compiled location like this:
+We can then represent a compiled location like this:
+\begin{verbatim}
     single byte of boolean flags:
 	uninterned name
 	packaged name
@@ -134,12 +139,13 @@ We can the represent a compiled location like this:
     [If has ID, ID as var-length integer]
     SC-Offset of primary location (as var-length integer)
     [If has save SC, SC-Offset of save location (as var-length integer)]
+\end{verbatim}
 
 
 
 
-But for a whizzy breakpoint facility, we would need a good source=>code map.
-Dumping a complete code=>source map might be as good a way as any to represent
+But for a whizzy breakpoint facility, we would need a good \verb+source=>code+ map.
+Dumping a complete \verb+code=>source+ map might be as good a way as any to represent
 this, due to the one-to-many relationship between source and code locations.
 
 We might be able to get away with just storing the source locations for the
@@ -161,7 +167,7 @@ several paths.  This ambiguity might be resolved by picking the shortest path
 or letting the user choose.
 
 At the primitive level, I guess what this means is that the structure of source
-locations (i.e. source paths) must be known, and the source=>code operation
+locations (i.e. source paths) must be known, and the \verb+source=>code+ operation
 should return a list of \verb+<source,code>+ pairs, rather than just a list of code
 locations.  This allows the debugger to resolve the ambiguity however it wants.
 
@@ -236,6 +242,7 @@ of ambiguous matches.  [Actually, it would probably be a good idea to store the
 package if we are going to allow variables to be closed over.]
 
 Some objects we would need:
+\begin{verbatim}
 Location:
 	The constant information about the place where a value is stored,
         everything but which particular frame it is in.  Operations:
@@ -278,6 +285,7 @@ Block:
         block-forms block => (source-location code-location)*
             Return the corresponding source locations and code locations for
             all forms (and form fragments) in the block.
+\end{verbatim}
 
 
 Variable maps:
@@ -317,7 +325,7 @@ variable:
 	both in space and compiler effort, so we will have to settle for some
 	sort of approximation.
 
-	The finest granularity at which it is easy to determine liveness is the
+	The finest granularity at which it is easy to determine liveness is
 	the block: we can regard the variable lifetime as the set of blocks
 	that the variable is live in.  Of course, the variable may be dead (and
 	thus contain meaningless garbage) during arbitrarily large portions of
@@ -332,7 +340,7 @@ The variable map should represent this information space-efficiently and with
 adequate computational efficiency.
 
 The SC and ID can be represented as small integers.  Although the ID can in
-principle be arbitrarily large, it should be <100 in practice.  The location
+principle be arbitrarily large, it should be $<$100 in practice.  The location
 can be represented by just the offset (a moderately small integer), since the
 SB is implicit in the SC.
 
@@ -353,7 +361,7 @@ We could probably save some space by cleverly representing the var-info as
 parallel vectors of different types, but this would be more painful in use.
 It seems better to just use a structure, encoding the unboxed fields in a
 fixnum.  This way, we can pass around the structure in the debugger, perhaps
-even exporting it from the the low-level debugger interface.
+even exporting it from the low-level debugger interface.
 
 [\#\#\# We need the save location too.  This probably means that we need two slots
 of bits, since we need the save offset and save SC.  Actually, we could let the
@@ -376,6 +384,7 @@ continue to be somewhat magical.]
 
 How about:
 
+\begin{verbatim}
 (defstruct var-info
   ;;
   ;; This variable's name. (symbol-name of the symbol)
@@ -392,6 +401,7 @@ How about:
   ;;
   ;; The variable's type, represented as list-style type descriptor.
   type)
+\end{verbatim}
 
 Then the debug-info holds a simple-vector of all the var-info structures for
 that component.  We might as well make it sorted alphabetically by name, so
@@ -482,7 +492,7 @@ in block compilation.
 The implementation is simple: per-environment TNs are flagged by the
 :Environment kind.  :Environment TNs are treated the same as :Normal TNs by
 everyone except for lifetime/conflict analysis.  An environment's TNs are also
-stashed in a list in the IR2-Environment structure.  During during the conflict
+stashed in a list in the IR2-Environment structure.  During the conflict
 analysis post-pass, we look at each block's environment, and make all the
 environment's TNs always-live in that block.
 
@@ -508,7 +518,7 @@ debugger.  In this case, it may be desirable to be able to indicate that only
 partial saving has been done.  For example, we don't want to have to save all
 the FP registers just so that we can use a couple extra general registers.
 
-When when the debugger see an escape frame, it knows that register values are
+When the debugger sees an escape frame, it knows that register values are
 located in the escape frame's "register save" area, rather than in the normal
 save locations.
 
diff --git a/docs/internals/front.tex b/docs/internals/front.tex
index 1ede8ec5d..458d65c68 100644
--- a/docs/internals/front.tex
+++ b/docs/internals/front.tex
@@ -6,8 +6,8 @@
 
 \#|
 
-Would be useful to have a Freeze-Type proclamation.  Its primary use would to
-be say that the indicated type won't acquire any new subtypes in the future.
+Would be useful to have a Freeze-Type proclamation.  Its primary use would be
+to say that the indicated type won't acquire any new subtypes in the future.
 This allows better open-coding of structure type predicates, since the possible
 types that would satisfy the predicate will be constant at compile time, and
 thus can be compiled as a skip-chain of EQ tests.  
@@ -131,15 +131,18 @@ with these cases.  [\#\#\# In the case of union types we may want to do somethin
 to preserve information for type constraint propagation.]
 
 
+\begin{verbatim}
     (apply \#'foo a b c)
 ==>
     (multiple-value-call \#'foo (values a) (values b) (values-list c))
+\end{verbatim}
 
 This way only MV-CALL needs to know how to do calls with unknown numbers of
 arguments.  It should be nearly as efficient as a special-case VMR-Convert
 method could be.
 
 
+\begin{verbatim}
 Make-String => Make-Array
 N-arg predicates associated into two-arg versions.
 Associate N-arg arithmetic ops.
@@ -148,6 +151,7 @@ Zerop, Plusp, Minusp, 1+, 1-, Min, Max, Rem, Mod
 (Values x), (Identity x) => (Prog1 x)
 
 All specialized aref functions => (aref (the xxx) ...)
+\end{verbatim}
 
 Convert (ldb (byte ...) ...) into internal frob that takes size and position as
 separate args.  Other byte functions also...
@@ -155,6 +159,7 @@ separate args.  Other byte functions also...
 Change for-value primitive predicates into \verb+(if <pred> t nil)+.  This isn't
 particularly useful during ICR phases, but makes life easy for VMR conversion.
 
+
 This last can't be a source transformation, since a source transform can't tell
 where the form appears.  Instead, ICR conversion special-cases calls to known
 functions with the Predicate attribute by doing the conversion when the
@@ -182,7 +187,7 @@ arguments, etc.)
 ]
 
 We only record a function's inline expansion in the global environment when the
-function is in the null lexical environment, since it the expansion must be
+function is in the null lexical environment, since the expansion must be
 represented as source.
 
 We do inline expansion of functions locally defined by FLET or LABELS even when
@@ -385,7 +390,7 @@ preserve the single-value semantics of the let-binding in this case.
 
 The REF and variable must be deleted as part of this operation, since the ICR
 would otherwise be left in an inconsistent state; we can't wait for the REF to
-be deleted due to bing unused, since we have grabbed the arg continuation and
+be deleted due to being unused, since we have grabbed the arg continuation and
 substituted it into the old DEST.
 
 The big reason for doing this transformation is that in macros such as INCF and
@@ -458,7 +463,7 @@ to tell when a node needs to be reoptimized and does the optimization.  These
 node types are special-cased: COMBINATION, IF, RETURN, EXIT, SET.
 
 The REOPTIMIZE flag in the COMBINATION-FUN is used to detect when the function
-information might have changed, so that we know when where are new assertions
+information might have changed, so that we know when there are new assertions
 that could be propagated from the function type to the arguments.
 
 When we discover something about a leaf, or substitute for leaf, we reoptimize
@@ -474,7 +479,7 @@ termination.  I believe that with the type system implemented, type inference
 will converge in finite time, but as a practical matter, it can take far too
 long to discover not much.  For this reason, ICR optimization is terminated
 after three consecutive passes that don't add or delete code.  This premature
-termination only happens 2% of the time.
+termination only happens 2\% of the time.
 
 
 \section{Flow graph simplification}
@@ -499,8 +504,10 @@ between predecessors.  IFs with identical branches would eventually be left
 with nothing in their branches.]
 
 We eliminate IF-IF constructs:
+\begin{verbatim}
     (IF (IF A B C) D E) ==>
     (IF A (IF B D E) (IF C D E))
+\end{verbatim}
 
 In reality, what we do is replicate blocks containing only an IF node where the
 predicate continuation is the block start.  We make one copy of the IF node for
@@ -621,7 +628,7 @@ only way that code is deleted other than the elimination of unreachable blocks.
 
 We need to do a pretty good job of guessing when a type check will ultimately
 need to be done.  Generic arithmetic, for example: In the absence of
-declarations, we will use use the safe variant, but if we don't know this, we
+declarations, we will use the safe variant, but if we don't know this, we
 will generate a check for NUMBER anyway.  We need to look at the fast-safe
 templates and guess if any of them could apply.
 
@@ -630,7 +637,7 @@ and assertions on those arguments.  This can be used with Valid-Function-Use
 to see which templates do or might apply to a particular call.  If we guess
 that a safe implementation will be used, then we mark the continuation so as to
 force a safe implementation to be chosen.  [This will happen if ICR optimize
-doesn't run to completion, so the icr optimization after type check generation
+doesn't run to completion, so the ICR optimization after type check generation
 can discover new type information.  Since we won't redo type check at that
 point, there could be a call that has applicable unsafe templates, but isn't
 type checkable.]
@@ -754,7 +761,7 @@ It's fairly easy to see how we can build these sets of restrictions and
 propagate them using flow analysis, but actually using this information seems
 a bit more ad-hoc.  
 
-Probably the biggest thing we do is look at all the refs.  If have proven that
+Probably the biggest thing we do is look at all the refs.  If we have proven that
 the value is EQ (EQL for a number) to some other leaf (constant or lambda-var),
 then we can substitute for that reference.  In some cases, we will want to do
 special stuff depending on the DEST.  If the dest is an IF and we proved (not
diff --git a/docs/internals/glossary.tex b/docs/internals/glossary.tex
index 1ed4694fe..1ac8a66e5 100644
--- a/docs/internals/glossary.tex
+++ b/docs/internals/glossary.tex
@@ -309,9 +309,9 @@ constant.  Generally this means that it is a pure function with no side
 effects.
 
 
-FSC
-full call
-function attribute
+\item[FSC]
+\item[full call]
+\item[function attribute]
 function
         "real" (allocates environment)
         meaning function-entry
@@ -406,6 +406,6 @@ value passing
 VAR
 VM
 VOP
-XEP
+\item[XEP]
 
 \end{description}
diff --git a/docs/internals/interpreter.tex b/docs/internals/interpreter.tex
index 0b302dfd7..e556f7b6b 100644
--- a/docs/internals/interpreter.tex
+++ b/docs/internals/interpreter.tex
@@ -100,7 +100,7 @@ between a "normal" return and a non-local one.]
 
 [Note that in any control transfer (normal or otherwise), the stepper may need
 to unwind out of an arbitrary number of levels of stepping.  This is because a
-form in a TR position may yield its to a node arbitrarily far our.]
+form in a TR position may yield its value to a node arbitrarily far out.]
 
 Another problem is with deciding what form is being stepped.  When we start
 evaluating a node, we dive into code that is nested somewhere down inside that
diff --git a/docs/internals/middle.tex b/docs/internals/middle.tex
index 7adc01817..29addbdb6 100644
--- a/docs/internals/middle.tex
+++ b/docs/internals/middle.tex
@@ -179,7 +179,7 @@ also done at the same time so that multiple passes aren't necessary.
  
 If safety is more important that speed and space, then we consider generating
 type checks on the values of nodes whose CONT has the Type-Check flag set.  If
-the destinatation for the continuation value is safe, then we don't need to do
+the destination for the continuation value is safe, then we don't need to do
 a check.  We assume that all full calls are safe, and use the template
 information to determine whether inline operations are safe.
 
@@ -395,7 +395,7 @@ mechanism.  This translation is specified by the particular VM definition; VMR
 conversion makes no assumptions about which operations are primitive or what
 operand types are worth special-casing.  The default calling mechanisms and
 other miscellaneous builtin features are implemented using standard VOPs that
-must implemented by each VM.
+must be implemented by each VM.
 
 Type information can be forgotten after VMR conversion, since all type-specific
 operation selections have been made.
@@ -490,8 +490,8 @@ over all the code in the component (not that big a consideration.)
 
 
 \#|
-Actually, what we do is do a backward graph walk from each unknown-values
-receiver.   As we go, we mark each walked block with ther ordered list of
+Actually, what we do is a backward graph walk from each unknown-values
+receiver.   As we go, we mark each walked block with the ordered list of
 continuations we believe are on the stack.  Starting with an empty stack, we:
  -- When we encounter another unknown-values receiver, we push that
     continuation on our simulated stack.
@@ -507,10 +507,10 @@ discard it.]
 
 
 [\#\#\# Also, we can't terminate our walk just because we hit a block previously
-walked.  We have to compare the the End-Stack with the values received along
+walked.  We have to compare the End-Stack with the values received along
 the current path: if we have more values on our current walk than on the walk
 that last touched the block, then we need to re-walk the subgraph reachable
-from from that block, using our larger set of continuations.  It seems that our
+from that block, using our larger set of continuations.  It seems that our
 actual termination condition is reaching a block whose End-Stack is already EQ
 to our current stack.]
 
@@ -519,7 +519,7 @@ to our current stack.]
 
 
 If at the start, the block containing the values receiver has already been
-walked, the we skip the walk for that continuation, since it has already been
+walked, we skip the walk for that continuation, since it has already been
 handled by an enclosing values receiver.  Once a walk has started, we
 ignore any signs of a previous walk, clobbering the old result with our own,
 since we enclose that continuation, and the previous walk doesn't take into
@@ -549,6 +549,7 @@ possibility of blocks being joined.  We could collect some unknown MVs in a
 block, then do a control transfer out of the receiver, and this control
 transfer could be squeezed out by merging blocks.  How about:
 
+\begin{verbatim}
     (tagbody
       (return
        (multiple-value-prog1 (foo)
@@ -559,6 +560,7 @@ transfer could be squeezed out by merging blocks.  How about:
       (return
        (multiple-value-prog1 (baz)
 	 bletch)))
+\end{verbatim}
 
 But the problem doesn't happen here (can't happen in general?) since a node
 buried within a block can't use a continuation outside of the block.  In fact,
@@ -580,7 +582,7 @@ pushes.]
 I believe that above concern with a dead use getting mashed inside a block
 can't happen, since the use inside the block must be the only use, and if the
 use isn't reachable from the push, then the use is totally unreachable, and
-should have been deleted, which would prevent the prevent it from ever being
+should have been deleted, which would prevent it from ever being
 annotated.
 ]
 ]
@@ -595,7 +597,7 @@ is all we need to do the inter-block analysis.
 After we have found out what stuff is on the stack at each block boundary, we
 look for blocks with predecessors that have junk on the stack.  For each such
 block, we introduce a new block containing code to restore the stack pointer.
-Since unknown-values continuations are represented as <start, count>, we can
+Since unknown-values continuations are represented as \verb+<start, count>+, we can
 easily pop a continuation using the Start TN.
 
 Note that there is only doubt about how much stuff is on the control stack,
@@ -621,13 +623,13 @@ I.e. we need some way to interpose arbitrary code in the path of value
 delivery.
 
 What we do is replace the NLX uses of the continuation with another
-continuation that is received by a MV-Call to %NLX-VALUES in a cleanup block
+continuation that is received by a MV-Call to \%NLX-VALUES in a cleanup block
 that is interposed between the NLX uses and the old continuation's block.  The
-MV-Call uses the original continuation to deliver it's values to.  
+MV-Call uses the original continuation to deliver its values to.  
 
 [Actually, it's not really important that this be an MV-Call, since it has to
 be special-cased by LTN anyway.  Or maybe we would want it to be an MV call.
-If did normal LTN analysis of an MV call, it would force the returned values
+If we did normal LTN analysis of an MV call, it would force the returned values
 into the unknown values convention, which is probably pretty convenient for use
 in NLX.
 
@@ -639,11 +641,13 @@ since THROW will use truly unknown values.]
 
 On entry to a dynamic extent that has non-local-exists into it (always at an
 ENTRY node), we take a complete snapshot of the dynamic state:
+\begin{verbatim}
     the top pointers for all stacks
     current Catch and Unwind-Protect
     current special binding (binding stack pointer in shallow binding)
+\end{verbatim}
 
 We insert code at the re-entry point which restores the saved dynamic state.
-All TNs live at a NLX EP are forced onto the stack, so we don't have to restore
+All TNs live at an NLX EP are forced onto the stack, so we don't have to restore
 them, and we don't have to worry about getting them saved.
 
diff --git a/docs/internals/object.tex b/docs/internals/object.tex
index abaefa016..043cabd18 100644
--- a/docs/internals/object.tex
+++ b/docs/internals/object.tex
@@ -592,14 +592,14 @@ The following are detailed slot descriptions:
    \item[LRA header word:]
       The immediate header-word data is the word offset from the enclosing code
       data-block's header-word to this word.  This allows GC and the debugger
-      to easily recover the code data-block from a LRA.  The code at the
+      to easily recover the code data-block from an LRA.  The code at the
       return point restores the current code pointer using a subtract immediate
       of the offset, which is known at compile time.
 \vspace{1ex}
    \item[Function entry point header-word:]
       The immediate header-word data is the word offset from the enclosing code
       data-block's header-word to this word.  This is the same as for the
-      retrun-PC header-word.
+      return-PC header-word.
    \item[Self-pointer back to header-word:]
       In a non-closure function, this self-pointer to the previous header-word
       allows the call sequence to always indirect through the second word in a
@@ -672,7 +672,7 @@ data-block.
 An advantage of using a single data-block to represent both the descriptor and
 non-descriptor parts of a function is that both can be represented by a
 single pointer.  This reduces the number of memory accesses that have to be
-done in a full call.  For example, since the constant pool is implicit in a
+done in a full call.  For example, since the constant pool is implicit in an
 LRA, a call need only save the LRA, rather than saving both the
 return PC and the constant pool.
 
diff --git a/docs/internals/retargeting.tex b/docs/internals/retargeting.tex
index 530bd04dd..e263c328a 100644
--- a/docs/internals/retargeting.tex
+++ b/docs/internals/retargeting.tex
@@ -100,7 +100,7 @@ Primitive types are used for two things:
 	since we might want to allow multiple ptypes.  This could be handled
 	by allowing "union primitive types", or by allowing multiple primitive
 	types to be specified (only in the operand restriction.)  The latter
-	would be long the lines of other more flexible VOP operand restriction
+	would be along the lines of other more flexible VOP operand restriction
 	mechanisms, (constant, etc.)
 
 
@@ -110,7 +110,7 @@ Ensure that load/save-operand never need to do representation conversion.
 The PRIMITIVE-TYPE more/coerce info would be moved into the SC.  This could
 perhaps go along with flushing the TN-COSTS.  We would annotate the TN with
 best SC, which implies the representation (boxed or unboxed).  We would still
-need represent the legal SCs for restricted TNs somehow, and also would have to
+need to represent the legal SCs for restricted TNs somehow, and also would have to
 come up with some other way for pack to keep track of which SCs we have already
 tried.
 
@@ -133,7 +133,7 @@ alternate.
 I guess a packed SC could also have immediate SCs as alternate SCs, and
 constant loading functions could be associated with SCs using this mechanism.
 
-So given a TN packed in SC X and a SC restriction for Y and Z, how do we know
+So given a TN packed in SC X and an SC restriction for Y and Z, how do we know
 which load function to call?  There would be ambiguity if X was an alternate
 for both Y and Z and they specified different load functions.  This seems
 unlikely to arise in practice, though, so we could just detect the ambiguity
@@ -153,11 +153,13 @@ relativized by the environment that the TN is allocated in.  Packing conflict
 information is kept in the storage base, but non-packed storage resources such
 as closure environments also have storage bases.
 Some storage bases:
+\begin{verbatim}
     General purpose registers
     Floating point registers
     Boxed (control) stack environment
     Unboxed (number) stack environment
     Closure environment
+\end{verbatim}
 
 A storage class is a potentially arbitrary set of the elements in a storage
 base.  Although conceptually there may be a hierarchy of storage classes such
@@ -226,7 +228,7 @@ system:
     since the portable semantics of types has already been dealt with.
 
  -- Different systems will have different specialized number and array types,
-    and different VOPs specialized for these types.  It is easy add this kind
+    and different VOPs specialized for these types.  It is easy to add this kind
     of knowledge without affecting the rest of the compiler.  All you have to
     do is define the VOPs and translations.
 
@@ -417,7 +419,7 @@ VOP writers expect:
       This returns a TN for the NFP if the caller uses the number stack, or
       nil.
    \item[SB-ALLOCATED-SIZE]
-      This returns the size of some storage based used by the currently
+      This returns the size of some storage base used by the currently
       compiling component.
    \item[...]
 \end{Lentry}
@@ -561,7 +563,7 @@ tail-recursive XEP calls.
 
 The unknown-values return convention has variants: single value and variable
 values.  We make this distinction to optimize the important case of a returner
-whose knows exactly one value is being returned.  Note that it is possible to
+who knows exactly one value is being returned.  Note that it is possible to
 return a single value using the variable-values convention, but it is less
 efficient.
 
@@ -689,7 +691,7 @@ Register usage at the time of the return for single value return, which
 goes with the unknown-values convention the caller used.
 
 A0
-   The holds the value.
+   This holds the value.
 
 CODE
    This holds the lisp-return-address at which the system continues executing.
@@ -875,7 +877,7 @@ then it just goes along with the "want one value, got it" case.
 If the returnee wants multiple values, and there's a shortage of return
 values, there are two cases to handle.  One, if the returnee wants fewer
 values than there are return registers, and we start at PC+N, then it fills
-in return registers A1..A<desired values necessary>; if we start at PC,
+in return registers \verb|A1..A<desired values necessary>|; if we start at PC,
 then the returnee is fine since the returning conventions have filled in
 the unused return registers with nil, but the returnee must adjust the
 stack pointer to dump possible stack return values (move OCFP to CSP).
@@ -897,7 +899,7 @@ This also restores CODE from LRA by subtracting an assemble-time constant.
 
 RECEIVE-UKNOWN-VALUES
 (I want whatever I get.)
-We want these at the end of our frame.  When the returnee starts starts at
+We want these at the end of our frame.  When the returnee starts at
 PC, it moves the return value registers to OCFP..OCFP[An] ignoring where
 the end of the stack is and whether all the return value registers had
 values.  The returner left room on the stack before the stack return values
@@ -1000,7 +1002,7 @@ When practical, ICR transforms should be used instead of VMR generators, since
 transforms are more portable and less error-prone.  Note that the Lisp code
 need not be implementation independent: it may contain all sorts of
 sub-primitives and similar stuff.  Generally a function should be implemented
-using a transform instead of an VMR translator unless it cannot be implemented
+using a transform instead of a VMR translator unless it cannot be implemented
 as a transform due to being totally evil or it is just as easy to implement as
 a translator because it is so simple.
 
@@ -1024,13 +1026,13 @@ i.e. what FPA is present is to do this using primitive types.  Note that the
 Primitive-Type function is VM supplied, and can look at any appropriate
 hardware configuration switches.  Short-Float can become 6881-Short-Float,
 AFPA-Short-Float, etc.  There would be separate SBs and SCs for the registers
-of each kind of FP hardware, with the each hardware-specific primitive type
+of each kind of FP hardware, with each hardware-specific primitive type
 using the appropriate float register SC.  Then the hardware specific templates
 would provide AFPA-Short-Float as the argument type restriction.
 
 Primitive type changes:
 
-The primitive-type structure is given a new %Type slot, which is the CType
+The primitive-type structure is given a new \%Type slot, which is the CType
 structure that is equivalent to this type.  There is also a Guard slot, with,
 if true is a function that control whether this primitive type is allowed (due
 to hardware configuration, etc.)  
@@ -1041,7 +1043,7 @@ an expression evaluated in the null environment that controls whether this type
 applies (default to none, i.e. constant T).
 
 The Primitive-Type-Type function returns the Lisp CType corresponding to a
-primitive type.  This is the %Type unless there is a guard that returns false,
+primitive type.  This is the \%Type unless there is a guard that returns false,
 in which case it is the empty type (i.e. NIL).
 
 [But this doesn't do what we want it to do, since we will compute the
@@ -1067,7 +1069,7 @@ I guess the guard should be associated with the template rather than the
 primitive type.  This would allow LTN and friends to easily tell whether a
 template applies in this configuration.  It is also probably more natural for
 some sorts of things: with some hardware variants, it may be that the SBs and
-representations (SCs) are really the same, but there some different allowed
+representations (SCs) are really the same, but there are some different allowed
 operations.  In this case, we could easily conditionalize VOPs without the
 increased complexity due to bogus SCs.  If there are different storage
 resources, then we would conditionalize Primitive-Type as well.
-- 
GitLab