Commit d72b4379 (asdf / ilc2010), authored Aug 31, 2010 by Robert P. Goldman
Parent: 974bf814
Changed file: upgrade.tex

    Last (I hope) proofreading pass over this section. I think it's good to go.
...
...
@@ -10,7 +10,7 @@ the development, distribution and usage of new versions of {\ASDF}.
 Only if we guarantee that {\ASDF}
 can be upgraded if needed
 can users rely on new features and bug fixes of {\ASDF}.
-Previouly, there was no way to universal way to load and configure {\ASDF}
+Previously, there was no portable way to load and configure {\ASDF}
 unless it had been pre-loaded with your Lisp image
 (as by \texttt{common-lisp-controller} under Debian),
 and there was no way to upgrade a pre-loaded {\ASDF} with a new version.
...
...
@@ -42,7 +42,7 @@ Unlike other build systems, such as {\make},
 \moneyquote{{\ASDF} is an ``in-image'' build system
 managing systems that are compiled and loaded in the current {\CL} image}.
-Other languages typically rely
+Other languages' build tools typically rely
 on some external operating system provided shell
 to build software that is loaded into virtual machines
 ({\em processes} in Unix parlance) distinct from the current one.
...
...
@@ -51,7 +51,7 @@ is typically kept in the filesystem,
 and incompatible changes in interfaces or internals of the build system
 are resolved simply by starting a new virtual machine.
-{\ASDF} does not rely on the starting of separate processes for compilation.
+{\ASDF} does not start separate processes for compilation.
 We believe that this there are a number of reasons for this design decision:
 \begin{itemize}
 \item {\CL} implementations were running on
...
...
@@ -91,8 +91,11 @@ or upgrade an existing {\ASDF} installation to the current code
 (if a previous version already exists).
 In addition, the code for an {\ASDF} version must recognize
 the special case when the very same version is already loaded
-so as to make such reloads idempotent
-and to avoid unnecessarily breaking things.
+so as to make such reloads idempotent.
+% I cut this so we wouldn't have to explain how not being idempotent might break
+% things. I think (hope) it's self-evident that idempotency here is A Good
+% Thing. [2010/08/31:rpg]
+% and to avoid unnecessarily breaking things.
 It does this by relying on a simple version identification string,
 to be bumped up at every modification of {\ASDF}.
...
...
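The hunk above mentions making reloads of the very same version idempotent by checking a version identification string. A minimal Common Lisp sketch of that kind of guard, with hypothetical names that are not taken from ASDF itself:

;; Hypothetical sketch, not ASDF's actual code: skip the work of reloading
;; when the version already in the image is the same as the one being loaded.
(defvar *my-tool-version* nil
  "Version string of the currently loaded code, if any.")

(defun same-version-already-loaded-p (new-version)
  (and *my-tool-version*
       (string= *my-tool-version* new-version)))

(unless (same-version-already-loaded-p "2.004")
  ;; ... perform the upgrade, then record the new version ...
  (setf *my-tool-version* "2.004"))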
@@ -101,18 +104,18 @@ to be bumped up at every modification of {\ASDF}.
 The semantics of redefining or overriding a function
 is not fully specified by the {\CL} standard.
-The many implementations at the time may have had explicitly different semantics,
+The many implementations at the time of standardization may have had explicitly different semantics,
 the semantic difficulties may have been overlooked,
 implementers may have called for underspecification
 as leaving them more room for optimization,
 or it may have otherwise not been considered appropriate
-for the committee to standardize on what wasn't widely accepted anyway.
-In writing the code that allows to upgrade {\ASDF},
+for the committee to standardize a practice that wasn't widely accepted.
+In writing the code that makes it possible to upgrade {\ASDF},
 we encountered two complementary difficulties
-with rebinding the functional value of symbols.
+when rebinding the functional value of symbols.
 The first difficulty arises
-from incompatibilities between the new and old function definitions
+from incompatibilities between new and old function definitions
 bound to a same symbol
 when new functions are dynamically called by an old client,
 with data following the old convention.
...
...
@@ -145,17 +148,17 @@ as long as it behaves in a semantically equivalent way.\footnote{
 The two above difficulties are inherent in redefining functions
 and are not specific to either {\CL} or {\ASDF}.
 However, these difficulties are particularly relevant in the case of {\ASDF},
-that drives compilation and loading of Lisp code
+because it drives compilation and loading of Lisp code
 possibly including new versions of {\ASDF} itself.
 {\ASDF}'s code is therefore likely to be in the continuation
 of its own function redefinitions,
 where the old code will for a short while
-be the client to the new code.
+be a client to the new code.
 Moreover, these difficulties are compounded by the fact that
-the {\CL} standard~\cite[section 3.2.2.3]{ANSI:1996:ANSa} leaves it unspecified
+the {\CL} standard~\cite[section 3.2.2.3]{ANSI:1996:ANSa} does not specify
 whether any particular call will be dynamic or static,
 unless the function was explicitly declared \lisp{notinline},
-at which point it should always be dynamic.
+in which case it should always be dynamic.
 In practice, implementations may legitimately
 inline function bodies,
 cache effective methods for generic function calls,
...
...
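The notinline behavior discussed in the hunk above can be pinned down with a small sketch (hypothetical function names, not from the paper): declaring a function notinline keeps calls to it dynamic, so existing callers see a later redefinition.

;; Hypothetical sketch: without this declamation an implementation may
;; inline or otherwise cache the call to SCALE inside TWICE-SCALED,
;; so a later redefinition of SCALE might not be seen there.
(declaim (notinline scale))

(defun scale (x) (* 2 x))

(defun twice-scaled (x)
  ;; Dynamic call through the function cell of SCALE.
  (scale (scale x)))

(twice-scaled 3)              ; => 12

(defun scale (x) (* 10 x))    ; redefine, as an upgrade would

(twice-scaled 3)              ; => 300, the new definition is used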
@@ -168,7 +171,7 @@ and the evaluation context.
 % \ftor{
 % Or do some SB-PCL optimization sometimes trigger
-% \emph{illegimate} static method cache semantics for notinline gfs?
+% \emph{illegitimate} static method cache semantics for notinline gfs?
 % --- TODO: write checks for method caching of notinline gfs,
 % and collect results from several implementations with cl-launch.}
...
...
@@ -179,7 +182,7 @@ and make way for a new definition.
 Indeed, in the simple case where a function is
 not referenced in the continuation of the current compile or load,
 and not exported to code from other files,
-all references to it will be overridden by newly loaded code
+all references to it will be overridden by newly loaded code.
 In this case, it is sufficient to {\fmakunbound} the function symbol
 (and possibly re-declaim its type) before redefining it
 with an incompatible signature.
...
...
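A minimal sketch (hypothetical names, not the paper's code) of the fmakunbound recipe described in the hunk above, for redefining a function with an incompatible signature:

;; Hypothetical sketch: old single-argument entry point.
(defun process-component (component)
  (format t "processing ~A~%" component))

;; Before loading the new definition, drop the old one so no stale
;; signature or type information survives the upgrade...
(fmakunbound 'process-component)
;; ...optionally re-declaim the new type, then redefine incompatibly.
(declaim (ftype (function (t &key (:verbose t)) (values)) process-component))
(defun process-component (component &key verbose)
  (when verbose
    (format t "processing ~A~%" component))
  (values))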
@@ -195,7 +198,7 @@ that haven't been standardized.
 We cannot work around limitations of MOP standardizations
 by using a portability layer such as {\CLOSERMOP}~\cite{costanza:closer},
 lest by doing so we create a circular dependency
-between these two pieces of software.
+between the portability layer (loaded using \ASDF{}) and \ASDF{} itself.
 \subsection{Shadowing a symbol}
...
...
@@ -230,7 +233,7 @@ syntactic conventions,
 such as \lisp{*ear-muffs*} for special variables
 and something similar for \lisp{+constants+}.
 There should \emph{never} be a need to turn the \lisp{*ear-muffs*} variable into
-something that is lexically scoped, or change \lisp{+constants+} at all.
+something that is lexically scoped, or to change \lisp{+constants+} at all.
 The main downside of shadowing as a redefinition mechanism is that it requires
 that all clients be reloaded and possibly recompiled
...
...
@@ -263,7 +266,7 @@ to function properly, they must be linked against
 the symbols from the new {\ASDF}.
 Ideally, whether we rebind or shadow would be a matter of
-the tension between intension and extension:
+the distinction between intension and extension:
 which symbols we consider intensional fixed entry points
 that denote some ``same'' higher meaning when implementation changes underneath,
 and which symbols denote extensional constant code values,
...
...
@@ -301,10 +304,10 @@ and require client packages to be reloaded to link to the new package object.
 {\ASDFii} takes care to define the \lisp{ASDF} package if it doesn't exist,
 redefine it properly if it exists, etc.
 {\ASDFii} reuses existing packages and symbols
-whenever possible to not
-invalidate previously interned client code, etc.
+whenever possible, so as not to
+invalidate previously interned client code, etc.
 This package wrangling was difficult to get right, and
 once again, we have to take into account the eager linking done by ECL and GCL.
-One reason why we could make this package wrangling work
+One reason we could make this package wrangling work
 is that we do not need to blindly handle the general case
 of upgrading arbitrary package definitions to arbitrary new ones.
 All we needed to do was to upgrade previous versions of our own packages.
...
...
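One way to picture the package reuse described above (a hypothetical sketch, not ASDF 2's actual package wrangling): make sure the package exists and exports the wanted symbols, but reuse any existing package and symbols so client code that already interned them stays linked.

;; Hypothetical sketch: create the package only if missing; INTERN and
;; EXPORT reuse symbols that clients may already reference.
(defun ensure-interface-package (name exported-names)
  (let ((package (or (find-package name)
                     (make-package name :use '(:common-lisp)))))
    (dolist (symbol-name exported-names)
      (export (intern symbol-name package) package))
    package))

(ensure-interface-package "MY-BUILD-TOOL" '("LOAD-SYSTEM" "FIND-SYSTEM"))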
@@ -323,7 +326,7 @@ Classes can be redefined,
 slots can be added to them, removed from them, or modified,
 and all instances will be automatically updated before their next use
 to fit the new definition.
-The {\longCLOS} ({\CLOS}) \cite{bobrow_etal88})
+The {\longCLOS} ({\CLOS}) \cite{bobrow_etal88}
 allows users to control this instance update programmatically
 by defining methods on {\uifrc}.
 We rely on this functionality in {\ASDFii}
...
...
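The class-redefinition protocol mentioned above, sketched with hypothetical class and slot names (not ASDF's real ones): redefining a class updates existing instances lazily, and a method on update-instance-for-redefined-class can fill in new slots from old state.

;; Hypothetical sketch of the CLOS update protocol.
(defclass widget ()
  ((name :initarg :name :accessor widget-name)))

(defvar *w* (make-instance 'widget :name "bootstrap"))

;; Redefine the class with an extra slot; *W* is updated before its next use.
(defclass widget ()
  ((name :initarg :name :accessor widget-name)
   (version :initform nil :accessor widget-version)))

(defmethod update-instance-for-redefined-class :after
    ((instance widget) added-slots discarded-slots property-list &rest initargs)
  (declare (ignore added-slots discarded-slots property-list initargs))
  ;; Derive a value for the new slot when an old instance is updated.
  (setf (widget-version instance) "upgraded"))

(widget-version *w*)   ; => "upgraded", the instance was updated on demand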
@@ -347,14 +350,16 @@ and therefore without the upgrade being properly run.
 Defining the method before the class in the source code
 may cause a warning the first time around when the class isn't defined yet.
 Inserting an introspective check for class existence
-may cause the method definition to not be statically compiled
+may cause the method definition not to be statically compiled
 and emit a warning on some implementations.
 Protecting the method definition with delayed evaluation (as we finally did)
-hushes the warning
-but causes slightly inefficient runtime compilation on some implementations;
-however it doesn't cause any significant user-visible pause,
-since the user is compiling {\ASDF} and presumably lots of other code with it,
-of which this little delayed compilation is but a tiny fraction.
+hushes the warning.
+Unfortunately, it also
+causes slightly inefficient runtime compilation on some implementations.
+Nevertheless,
+it doesn't cause any significant user-visible pause,
+since the user is compiling {\ASDF} and (presumably lots of other code with it);
+the slight added delay is not perceptible.
 The {\CL} protocol for class redefinition is relatively well-designed
 and quite effectively handles the difficult problem of schema upgrade
...
...
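A rough sketch of the kind of guarded, delayed definition discussed above (hypothetical names; the paper does not show its actual code): only evaluate the defmethod form if the class is already present, and do so at run time rather than compile time, so the compiler never warns about a possibly missing class.

;; Hypothetical sketch: define an upgrade-related method only when the
;; class it specializes on already exists, delaying the DEFMETHOD until
;; run time so the compiler never sees a forward-referenced class.
(defun maybe-define-upgrade-hook (class-name)
  (when (find-class class-name nil)
    (eval `(defmethod update-instance-for-redefined-class :after
               ((instance ,class-name) added discarded plist &rest initargs)
             (declare (ignore added discarded plist initargs))
             (format *trace-output* "~&upgraded a ~A instance~%" ',class-name)))))

;; A no-op if the old class was never loaded in this image.
(maybe-define-upgrade-hook 'legacy-component)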
@@ -369,7 +374,7 @@ independently of whether the code is an initial definition or an upgrade.
 It is to the credit of {\CL} that dynamic code upgrade is possible at all;
 it is not possible in most programming languages.
-However, it is possible to support it much better.
+However, it is possible to support dynamic code upgrade much better.
 For instance, Erlang solves the issue of dynamic code upgrade
 by providing syntactic distinction
 between the two semantically different kinds of calls:
...
...
@@ -399,7 +404,7 @@ shadowing the usual reader and evaluator to replace them with something
 that provides well-defined semantics for hot upgrade,
 assuming all code is (re)compiled on top of this implementation
 rather than directly with the underlying implementation.
-This, however, would be a large challenging task and not obviously worth the cost.
+This, however, would be a large, challenging task and not obviously worth the cost.
 Furthermore,
 if one were to design and implement
 what amounts to a new language on top of {\CL},
...
...
@@ -409,11 +414,11 @@ within a same Lisp image (as is common nowadays),
 some model of atomicity or PCLSRing \cite{PCLSRing}
 would be required,
 which also goes beyond the current {\CL} language specification.
-Lacking such a better specified Lisp, possibly implemented atop {\CL},
+Lacking such a better-specified Lisp, possibly implemented atop {\CL},
 there are ways to work around these limitations;
 but not only are they are quite unidiomatic,
-they require manual management
-(since by assumption we rejected implementing them on top of {\CL}).
+they require manual management.
+% (since by assumption we rejected implementing them on top of {\CL}).
 For instance, we could use some kind of symbol versioning:
 use completely different symbols any time we would previously redefine things,
 mark old symbols as obsolete and never reuse them.
...
...
@@ -427,12 +432,13 @@ would itself need to be renamed with a new version
 since its contents have changed to use new function names.
 In a limited way, that is what uninterning symbols does for you,
 and what renaming away packages would do, etc.
-And this technique similarly requires new clients to be recompiled
+This technique similarly requires new clients to be recompiled
 any time any code is modified.
 This latter approach is semantically safe and technically simple,
-but we didn't adopt it so far, because of its social issue:
-it requires us to either keep supporting old interfaces,
+but we didn't adopt it, because of its social implications.
+This approach
+requires us to either keep supporting old interfaces,
 or gratuitously break old programs, all the more gratuitously
 when the incompatibility with previous interface lies in
 ``extensions'' that were conceptually broken and remained (mostly?) unused.
...
...
@@ -454,9 +460,9 @@ The good news is that it is possible to write hot upgradable code in {\CL}
 in a reasonably portable way, whereas dynamic code upgrade is not even possible
 in most programming languages.
 The bad news is that hot upgrade remains quite tricky,
-and it imposes limitations on the code to be upgraded,
-especially when trying to do it portably.
-In order to write hot-upgrade code, especially \emph{portable} hot upgrade,
+and it imposes limitations on the code to be upgraded.
+In order to write hot
+upgrade code,
 you have to use application-specific knowledge to determine what is safe and
 what is not.
 Furthermore,
...
...
@@ -466,7 +472,7 @@ it can even damage the operation of a single-threaded environment.
 \moneyquote{{\CL} support for hot upgrade of code may exist
 but is anything but seamless.}
 Happily, programmers only need
-to deal with hot-upgrade as an issue for \emph{their own} programs, and so
+to deal with hot upgrade as an issue for \emph{their own} programs, and so
 they have the required, application-specific knowledge available;
 so at least the problem is socially solvable, if technically hard.
...
...
@@ -474,8 +480,8 @@ In the end,
 \moneyquote{the general problem with {\CL} is that
 its semantics are defined in terms of irreversible side-effects
 to global data structures in the current image.}
-This complicates not only hot upgrade but also
-make semantic analysis, separate compilation, dependency management,
+Not only does this complicate hot upgrade but also
+makes semantic analysis, separate compilation, dependency management,
 and a lot of things much harder than they should be.
...
...