
Yukari Hafner — The State of MacOS Support
@2023-11-27 14:04 · 30 hours ago
I've been writing libraries for Common Lisp for over a decade now (lord almighty), and for most of that time I've tried to ensure that the libraries would, at the very least, work on all three major operating systems: Windows, Linux, and MacOS.
Usually doing so isn't hard, as I can rely on the implementation and the language standard, but especially for libraries that deal with foreign code or operating system interfaces, a bit more work is needed. For the longest time I went the extra mile of providing that support myself, despite not being a MacOS user, and despite vehemently disapproving of Apple as a company and their treatment of users and developers.
About two years ago, I stopped. I had had enough of all the extra work the platform put on me, for zero personal gain. In particular, I had had enough of the extra work it kept demanding for things that had already worked before. The amount of work only ever increased, with barely any thanks or compensation, and Apple's war against its own users and developers only ever escalated as well.
I cannot in good conscience support MacOS, but I understand that a lot of people are stuck on that platform for one reason or another, and I do not wish to punish them, either. However, I lack a working MacOS setup these days, especially for the newer M1/2/3 systems. And so I appeal to you, MacOS users: if you have any interest in any of the following libraries, please contribute patches.
Requiring only C library builds:
- glfw
- cl-vorbis
- cl-opus
- cl-mixed (C lib: https://github.com/Shirakumo/libmixed )
- cl-fbx
- cl-turbojpeg
- cl-theora
The C library projects should not be much work to fix: once the binaries are built for AMD64/ARM64 they should pretty much be done. A couple require Lisp patches, though:
- file-notify — The darwin implementation is buggy and I don't know why. The documentation for MacOS sucks.
- machine-state — Needs testing of the posix APIs and possibly darwin-specific fixups.
- Kandria — No idea how much needs doing here, probably a bunch of backend specific things to test and implement in Trial.
If you decide to contribute, I'm sure a lot of your fellow MacOS users would be very thankful!
And if you like what I do in general, please consider supporting my work on Patreon!
Joe Marshall — GitHub Co-pilot Review
@2023-11-24 15:50 · 4 days ago
I recently tried out GitHub CoPilot. It is a system that uses generative AI to help you write code.
The tool interfaces to your IDE — I used VSCode — and acts as an autocomplete on steroids … or acid. Suggested comments and code appear as you move the cursor and you can often choose from a couple of different completions. The way to get it to write code was to simply document what you wanted it to write in a comment. (There is a chat interface where you can give it more directions, but I did not play with that.)
I decided to give it my standard interview question: write a simple TicTacToe class, include a method to detect a winner. The tool spit out a method that checked an array for three in a row horizontally, vertically, and along the two diagonals. Almost correct. While it would detect three ‘X’s or ‘O’s, it also would detect three nulls in a row and declare null the winner.
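For reference, a correct three-in-a-row check has to skip empty cells. A sketch of such a check, written here in Common Lisp rather than the Python the tool produced (this is not CoPilot's output, and the function name is made up):

```lisp
;; BOARD is a vector of 9 cells, each #\X, #\O, or NIL (empty).
;; Returns the winning mark, or NIL if there is no winner.
(defun winner (board)
  (let ((lines '((0 1 2) (3 4 5) (6 7 8)   ; rows
                 (0 3 6) (1 4 7) (2 5 8)   ; columns
                 (0 4 8) (2 4 6))))        ; diagonals
    (loop for (a b c) in lines
          for v = (aref board a)
          when (and v                      ; an empty cell never "wins"
                    (eql v (aref board b))
                    (eql v (aref board c)))
            return v)))
```

The `(and v ...)` guard is exactly what CoPilot's version was missing: without it, three NILs in a row also compare equal and NIL is declared the winner.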
I went into the class definition and simply typed a comment character. It suggested an __init__ method. It decided on a board representation of a 1-dimensional array of 9 characters, ‘X’ or ‘O’ (or null), and a character that determined whose turn it was. Simply by moving the cursor down I was able to get it to suggest methods to return the board array, return the current turn, list the valid moves, and make a move. The suggested code was straightforward and didn’t have bugs.
I then decided to try it out on something more realistic. I have a linear fractional transform library I wrote in Common Lisp and I tried porting it to Python. Co-pilot made numerous suggestions as I was porting, with varying degrees of success. It was able to complete the equations for a 2x2 matrix multiply, but it got hopelessly confused on higher order matrices. For the print method of a linear fractional transform, it produced many lines of plausible looking code. Unfortunately, the code has to be better than “plausible looking” in order to run.
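For context, composing two linear fractional transforms amounts to the 2x2 matrix multiply mentioned above: f(x) = (ax+b)/(cx+d) corresponds to the matrix [[a b] [c d]], and composition multiplies the matrices. A hand-written sketch (not the library's actual code; the name and signature are made up):

```lisp
;; Compose (ax+b)/(cx+d) with (a2·x+b2)/(c2·x+d2) by multiplying
;; their coefficient matrices; returns the four new coefficients.
(defun compose-lft (a b c d a2 b2 c2 d2)
  (values (+ (* a a2) (* b c2)) (+ (* a b2) (* b d2))
          (+ (* c a2) (* d c2)) (+ (* c b2) (* d d2))))
```

Composing with the identity transform x = (1·x+0)/(0·x+1) returns the other transform's coefficients unchanged.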
As a completion tool, co-pilot muddled its way along. Occasionally, it would get a completion impressively right, but just as frequently — or more often — it would get the completion wrong, either grossly or subtly. It is the latter that made me nervous. Co-pilot would produce code that looked plausible, but it required a careful reading to determine if it was correct. It would be all too easy to be careless and accept buggy code.
The code Co-Pilot produced was serviceable and pedestrian, but often not what I would have written. I consider myself a “mostly functional” programmer. I use mutation sparingly, and prefer to code by specifying mappings and transformations rather than sequential steps. Co-pilot, drawing from a large amount of code written by a variety of authors, seems to prefer to program sequentially and imperatively. This isn’t surprising, but it isn’t helpful, either.
Co-pilot is not going to put any programmers out of work. It simply isn’t anywhere near good enough. It doesn’t understand what you are attempting to accomplish with your program; it just pattern matches against other code. A fair amount of code is full of patterns and the pattern matching does a fair job. But exceptions are the norm, and Co-pilot won’t handle edge cases unless the edge case is extremely common.
I found myself accepting Co-pilot’s suggestions on occasion. Often I’d accept an obviously wrong suggestion because it was close enough and editing it seemed like less work. But I always had to guard against code that seemed plausible but was not correct. I found that I spent a lot of time reading and considering the code suggestions. Any time savings from generating these suggestions was used up in vetting the suggestions.
One danger of Co-pilot is using it as a coding standard. It produces “lowest common denominator” code — code that an undergraduate who hadn’t completed the course might produce. For those of us who think the current standard of coding is woefully inadequate, Co-pilot just reinforces this style of coding.
Co-pilot is kind of fun to use, but I don’t think it helps me be more productive. It is a bit quicker than looking things up on Stack Overflow, but its results have less context. You wouldn’t go to Stack Overflow and just copy code blindly. Co-pilot isn’t quite that — it will at least rename the variables — but it produces code that is more likely buggy than not.
Nicolas Martyanoff — Interactive Common Lisp development
@2023-11-19 18:00 · 9 days ago
Common Lisp programming is often presented as “interactive”. In most languages, modifications to your program are applied by recompiling it and restarting it. In contrast, Common Lisp lets you incrementally modify your program while it is running.
While this approach is convenient, especially for exploratory programming, it also means that the state of your program during execution does not always reflect the source code. You do not just define new constructs: you look them up, inspect them, modify them or delete them. I had to learn a lot of subtleties the hard way. This article is a compendium of information related to the interactive nature of Common Lisp.
Variables
In Common Lisp variables are identified by symbols. Evaluating (SETQ A 42) creates or updates a variable with the integer 42 as value, and associates it with the A symbol. After the call to SETQ, (BOUNDP 'A) will return T and (SYMBOL-VALUE 'A) will return 42.
You do not delete a variable: instead, you remove the association between the symbol and the variable. You do so with MAKUNBOUND. Following the previous example, (MAKUNBOUND 'A) will remove the association between the A symbol and the variable, and (BOUNDP 'A) returns NIL as expected. As for (SYMBOL-VALUE 'A), it now signals an UNBOUND-VARIABLE error as mandated by the standard.
What about DEFVAR and DEFPARAMETER? They are also used to declare variables (globally defined ones), associating them with symbols. Both define “special” variables (i.e. variables for which all bindings are dynamic; see CLtL2 9.2). The difference is that DEFVAR does not evaluate its initial value form if the variable already has a value. MAKUNBOUND will work on variables declared with DEFVAR or DEFPARAMETER as expected.
DEFCONSTANT is a bit more complicated. CLtL2 5.3.2 states that “once a name has been declared by defconstant to be constant, any further assignment to or binding of that special variable is an error”, but does not clearly define whether MAKUNBOUND should or should not be able to be used on constants. However, CLtL2 5.3.2 also states that “defconstant [...] does assert that the value of the variable name is fixed and does license the compiler to build assumptions about the value into programs being compiled”. If the compiler is allowed to rely on the value associated with the variable name, it would make sense not to allow the deletion of the binding. Thus it is recommended to only use constants for values that are guaranteed to never change, e.g. mathematical constants. Most of the time you want DEFPARAMETER.
Note that MAKUNBOUND does not apply to lexical variables.
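The whole lifecycle described above can be sketched at the REPL (the variable name is made up for the example):

```lisp
;; Create a global special variable, inspect it, then sever the
;; symbol/variable association with MAKUNBOUND.
(defparameter *answer* 42)
(boundp '*answer*)        ; => T
(symbol-value '*answer*)  ; => 42
(makunbound '*answer*)    ; remove the association
(boundp '*answer*)        ; => NIL
;; (symbol-value '*answer*) would now signal UNBOUND-VARIABLE
```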
Functions
Common Lisp is a Lisp-2, meaning that variables and functions are part of two separate namespaces. Despite this clear separation, functions behave similarly to variables.
Using DEFUN will either create or update the global function associated with a symbol. SYMBOL-FUNCTION returns the globally defined function associated with a symbol, and FMAKUNBOUND deletes this association.
Let us point out a common mistake when referencing functions: (QUOTE F) (abbreviated as 'F) yields a symbol while (FUNCTION F) (abbreviated as #'F) yields a function. The function argument of FUNCALL and APPLY can be either a symbol or a function (see CLtL2 7.3). This has two consequences:
First, one can write a function referencing F as (QUOTE F) with the expectation that F will later be bound to a function. The following function definition is perfectly valid even though F has not been defined yet:
(defun foo (a b)
(funcall 'f a b))
Second, redefining the F function will update its association (or binding) to the F symbol, but the previous function will still be available if it has been referenced somewhere before the update. For example:
(setf (symbol-function 'foo) #'1+)
(let ((old-foo #'foo))
(setf (symbol-function 'foo) #'1-)
(funcall old-foo 42))
What about macros? Since macros are a specific kind of function (CLtL2 5.1.4: “a macro is essentially a function from forms to forms”), it is not surprising that they share the same namespace and can be manipulated in the same way as functions, with FBOUNDP, SYMBOL-FUNCTION and FMAKUNBOUND.
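FBOUNDP and FMAKUNBOUND mirror BOUNDP and MAKUNBOUND in the function namespace; a minimal sketch (the function name is made up):

```lisp
;; Define a global function, then delete its binding.
(defun greet () :hello)
(fboundp 'greet)       ; => true
(fmakunbound 'greet)   ; remove the symbol/function association
(fboundp 'greet)       ; => NIL
;; (greet) would now signal UNDEFINED-FUNCTION
```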
Symbols and packages
While functions and variables are familiar concepts to developers, Common Lisp symbols and packages are a bit more peculiar.
A symbol is interned when it is part of a package. The most explicit way to create an interned symbol is to use INTERN, e.g. (INTERN "FOO"). INTERN interns the symbol in the current package by default, but one can pass a package as second argument. After that, (FIND-SYMBOL "FOO") will return our interned symbol as expected.
More surprisingly, the reader automatically interns symbols. You can test it by evaluating (READ-FROM-STRING "BAR"). After evaluation, BAR is a symbol interned in the current package. This also means that it is very easy to pollute a package with symbols in ways you did not necessarily expect. To clean up, simply use UNINTERN. Remember to refer to the right symbol: to remove the symbol BAR from the package FOO, use (UNINTERN 'FOO::BAR "FOO").
A symbol is either internal or external. EXPORT will make a symbol external to its package while UNEXPORT will make it internal. As with UNINTERN, confusion usually arises around which symbol is affected. (UNEXPORT 'FOO:BAR "FOO") correctly refers to the external symbol in the FOO package and makes it internal again. (UNEXPORT 'BAR "FOO") will signal an error since the BAR symbol is not part of the FOO package (unless of course the current package happens to be FOO).
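The intern/export/unintern cycle can be sketched in a scratch package (the package and symbol names are made up):

```lisp
;; Intern a symbol, export it, then undo both operations.
(make-package :scratch :use '(:cl))
(intern "FOO" :scratch)                          ; => SCRATCH::FOO
(export (find-symbol "FOO" :scratch) :scratch)
(find-symbol "FOO" :scratch)                     ; => SCRATCH:FOO, :EXTERNAL
(unexport (find-symbol "FOO" :scratch) :scratch) ; internal again
(unintern (find-symbol "FOO" :scratch) :scratch)
(find-symbol "FOO" :scratch)                     ; => NIL, NIL
```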
Packages themselves can be created with MAKE-PACKAGE and destroyed with DELETE-PACKAGE. Developers are usually more familiar with DEFPACKAGE, a macro allowing the creation of a package and its configuration (package use list, imported and exported symbols, etc.) in a declarative way. A surprising and frustrating behavior is that evaluating a DEFPACKAGE form for a package that already exists will result in undefined behavior if the new declaration “is not consistent” (CLtL2 11.7) with the current state of the package. As an example, adding symbols to the export list is perfectly fine. Removing one will result in undefined behavior (usually an error) due to the inconsistency of the export list. Fortunately, Common Lisp offers all the necessary functions to manipulate packages and their symbols: use them!
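For instance, instead of re-evaluating a DEFPACKAGE form with a shrunken export list (undefined behavior), the export can be removed programmatically (the package and symbol names below are hypothetical):

```lisp
;; The original declarative definition.
(defpackage :demo
  (:use :cl)
  (:export :foo :bar))

;; Later we decide BAR should no longer be exported: rather than
;; editing and re-evaluating DEFPACKAGE, call UNEXPORT directly.
(unexport (find-symbol "BAR" :demo) :demo)
```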
Classes
The Common Lisp standard includes CLOS, the Common Lisp Object System. Unsurprisingly it provides multiple ways to interact with classes and objects dynamically.
Like variables and functions, classes are identified by symbols, and FIND-CLASS returns the class associated with a symbol. Class names are part of a separate namespace shared with structures and types.
The DEFCLASS macro is the only way to define or redefine a class. Redefining a class means that instances created afterward with MAKE-INSTANCE will use the new definition. Existing instances are updated: newly added slots are added (either unbound or using the value associated with :INITFORM) and slots that are not defined anymore are deleted.
UPDATE-INSTANCE-FOR-REDEFINED-CLASS is particularly interesting: developers can define methods for this generic function in order to control how instances are updated when their class is redefined.
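The update of existing instances can be observed directly (a hypothetical POINT class; note that implementations may defer the update until the instance is next accessed):

```lisp
;; Original class and an instance of it.
(defclass point () ((x :initarg :x :accessor x)))
(defparameter *p* (make-instance 'point :x 1))

;; Redefine the class with an extra slot.
(defclass point () ((x :initarg :x :accessor x)
                    (y :initform 0 :accessor y)))

(x *p*)  ; => 1, the existing slot is preserved
(y *p*)  ; => 0, the added slot gets its :INITFORM
```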
Defining classes may imply implicitly defining methods: the :ACCESSOR, :READER and :WRITER slot keyword arguments will lead to the creation of generic functions. When a class is redefined, methods associated with slots that have been removed will live on.
A limitation of CLOS is that classes cannot be deleted. FIND-CLASS can be used as a place, and (SETF (FIND-CLASS 'FOO) NIL) will remove the association between the FOO symbol and the class, but the class itself and its instances will not disappear. While this limitation may seem strange, ask yourself how an implementation should handle instances of a class that has been deleted.
The class of an instance can be changed with CHANGE-CLASS: slots that exist in the new class are conserved while those that do not are deleted. New slots are either unbound or set to the value associated with :INITFORM in the new class. In a way similar to UPDATE-INSTANCE-FOR-REDEFINED-CLASS, UPDATE-INSTANCE-FOR-DIFFERENT-CLASS lets developers control the process precisely.
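A small sketch of that slot behavior (the class names are made up):

```lisp
;; RADIUS exists in both classes; SIDE only in the new one.
(defclass circle () ((radius :initarg :radius :accessor radius)))
(defclass rounded-square () ((radius :initarg :radius :accessor radius)
                             (side :initform 1 :accessor side)))

(defparameter *shape* (make-instance 'circle :radius 5))
(change-class *shape* 'rounded-square)
(radius *shape*)  ; => 5, conserved across the class change
(side *shape*)    ; => 1, initialized from its :INITFORM
```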
Generics and methods
Generics are functions which can be specialized based on the class (and not type as one could expect) of their arguments and which can have a method combination type.
Generics can be created explicitly with DEFGENERIC or implicitly when DEFMETHOD is called and the list of parameter specializers and method combination does not match any existing generic function. Since generics are functions, FBOUNDP, SYMBOL-FUNCTION and FMAKUNBOUND will work as expected.
Methods themselves are either defined as part of the DEFGENERIC call or separately with DEFMETHOD. Discovering the different methods associated with a generic function is a bit more complicated. There is no standard way to list the methods associated with a generic, but it is at least possible to look up a method with FIND-METHOD. Do remember to pass a function (and not a symbol) as the generic, and to pass classes (and not symbols naming classes) in the list of specializers.
Redefinition is not as obvious as for non-generic functions. When redefining a generic with DEFGENERIC, all methods defined as part of the previous DEFGENERIC form are removed and methods defined in the redefinition are added. However, methods defined separately with DEFMETHOD are not affected. For example, in the following code, the second call to DEFGENERIC will replace the two methods specialized on INTEGER and FLOAT respectively by a single one specialized on a STREAM, but the method specialized on STRING will remain unaffected.
(defgeneric foo (a)
(:method ((a integer))
(format nil "~A is an integer" a))
(:method ((a float))
(format nil "~A is a float" a)))
(defmethod foo ((a string))
(format nil "~S is a string" a))
(defgeneric foo (a)
(:method ((a stream))
(format nil "~A is a stream" a)))
Note that trying to redefine a generic with a different parameter lambda list will cause the removal of all previously defined methods since none of them can match the new parameters.
Removing a method requires you to find it first with FIND-METHOD and then use REMOVE-METHOD. With the previous example, removing the method specialized on a STRING argument is done with:
(remove-method #'foo (find-method #'foo nil (list (find-class 'string)) nil))
Working with methods is not always easy, and two errors are very common.
First, remember that changing the qualifier in a DEFMETHOD form defines a new method. If you realize that your :AFTER method should use :AROUND and reevaluate the DEFMETHOD form, remember to delete the method with the :AFTER qualifier or you will end up with two methods being called.
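The cleanup uses the same FIND-METHOD/REMOVE-METHOD pair as above, this time passing the qualifier list (a hypothetical generic named FROB):

```lisp
(defgeneric frob (x)
  (:method ((x string)) x))

;; The mistaken method...
(defmethod frob :after ((x string)) nil)
;; ...reevaluated with the intended qualifier, which defines a NEW method:
(defmethod frob :around ((x string)) (call-next-method))

;; The stale :AFTER method must be removed explicitly.
(remove-method #'frob
               (find-method #'frob '(:after)
                            (list (find-class 'string)) nil))
```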
Second, when defining a method for a generic from another package, remember to correctly refer to the generic. If you want to define a method on the BAR generic from package FOO, use (DEFMETHOD FOO:BAR (...) ...) and not (DEFMETHOD BAR (...) ...). In the latter case, you will define a new BAR generic in the current package.
Meta Object Protocol
While CLOS is already quite powerful, various interactions are impossible. One cannot create classes or methods programmatically, introspect classes or instances for example to list their slots or obtain all their superclasses, or list all the methods associated with a generic function.
In addition to being an example of a CLOS implementation, The Art of the Metaobject Protocol defines multiple extensions to CLOS, including metaclasses, metaobjects, dynamic class and generic creation, class introspection and much more.
Most Common Lisp implementations implement at least part of these extensions, usually abbreviated as “MOP”, for “MetaObject Protocol”. The well-known closer-mop system can be used as a compatibility layer for multiple implementations.
Structures
Structures are record constructs defined with DEFSTRUCT. At a glance they may seem very similar to classes, but they have a fundamental limitation: the results of redefining a structure are undefined (CLtL2 19.2).
While this property allows implementations to handle structures in a more efficient way than classes, it makes structures unsuitable for incremental development. As such, they should only be used as a last resort, when a regular class has been proved to be a performance bottleneck.
Conditions
While conditions look very similar to classes the Common Lisp standard does not define them as classes. This is one of the few differences between the standard and CLtL2 which clearly states in 29.3.4 that “Common Lisp condition types are in fact CLOS classes, and condition objects are ordinary CLOS objects”.
This is why one uses DEFINE-CONDITION instead of DEFCLASS and MAKE-CONDITION instead of MAKE-INSTANCE. This also means that one should not use slot-related functions (including the very useful WITH-SLOTS macro) with conditions.
In practice, most modern implementations follow CLtL2 and the CLOS-CONDITIONS:INTEGRATE X3J13 cleanup issue and implement conditions as CLOS classes, meaning that conditions can be manipulated and redefined like any other class. And like any other class, they cannot be deleted.
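A minimal condition definition and use, sticking to the portable DEFINE-CONDITION/MAKE-CONDITION interface (the condition type and reader names are made up):

```lisp
;; A condition with one slot and a custom report function.
(define-condition parse-failure (error)
  ((input :initarg :input :reader parse-failure-input))
  (:report (lambda (condition stream)
             (format stream "could not parse ~S"
                     (parse-failure-input condition)))))

;; Signal it and recover the slot value in a handler.
(handler-case (error 'parse-failure :input "zzz")
  (parse-failure (c) (parse-failure-input c)))  ; => "zzz"
```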
Types
Types are identified by symbols and are part of the same namespace as classes (which should not be surprising since defining a class automatically defines a type with the same name).
Types are defined with DEFTYPE, but the documentation is surprisingly silent on the effects of type redefinition. This can lead to interesting situations. On some implementations (e.g. SBCL and CCL), if a class slot is defined as having the type FOO, redefining FOO will not be taken into account and the type checking operation (which is not mandated by the standard) will use the previous definition of the type. Unfortunately Common Lisp does not mandate any specific behavior on slot type mismatches (CLtL2 28.1.3.2).
Thus developers should not expect any useful effect from redefining types. Restarting the implementation after substantial type changes is probably best.
In the same vein, interactions with types are very limited. You cannot find a type by its symbol or even check whether a type exists or not. Calling TYPE-OF on a value will return a type this value satisfies, but the nature of the type is implementation-dependent (CLtL2 4.9): it could be any supertype. In other words, TYPE-OF could absolutely return T for all values but NIL. At least SUBTYPEP lets you check whether a type is a subtype of another type.
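What introspection does exist can be sketched quickly (the type name is made up):

```lisp
;; A derived type and the portable queries that work on it.
(deftype digit () '(integer 0 9))
(typep 5 'digit)            ; => T
(subtypep 'digit 'integer)  ; => T, T  (definitely a subtype)
(subtypep 'integer 'digit)  ; => NIL, T (definitely not)
```

The second value of SUBTYPEP indicates whether the answer is certain; for some compound types an implementation is allowed to answer NIL, NIL ("I don't know").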
Going further
Common Lisp is a complex language with a lot of subtleties, way more than what can be covered in a blog post. The curious reader will probably skip the standard (not because you have to buy it, but because it is a low quality scan of a printed document) and jump directly to CLtL2 or the Common Lisp HyperSpec. The Art of the Metaobject Protocol is of course the normative reference for the CLOS extensions usually referred to as “MOP”.
Quicklisp news — October 2023 Quicklisp dist update now available
@2023-10-30 00:46 · 29 days ago
New projects:
- 3d-math — A library implementing the necessary linear algebra math for 2D and 3D computations — zlib
- ansi-test-harness — A testing harness that fetches ansi-test and allows subsets and extrinsic systems — MIT
- babylon — Jürgen Walther's modular, configurable, hybrid knowledge engineering systems framework for Common Lisp, restored from the CMU AI Repository. — MIT
- calm — CALM - Canvas Aided Lisp Magic — GNU General Public License, version 2
- cffi-object — A Common Lisp library that enables fast and convenient interoperation with foreign objects. — Apache-2.0
- cffi-ops — A library that helps write concise CFFI-related code. — Apache-2.0
- cl-brewer — Provides CI settings for cl-brewer. — Unlicense
- cl-jwk — Common Lisp system for decoding public JSON Web Keys (JWK) — BSD 2-Clause
- cl-plus-ssl-osx-fix — A fix for CL+SSL library paths on OSX needed when you have Intel and Arm64 Homebrew installations. Should be loaded before CL+SSL. — Unlicense
- cl-server-manager — Manage port-based servers (e.g., Swank and Hunchentoot) through a unified interface. — MIT
- cl-transducers — Ergonomic, efficient data processing. — LGPL-3.0-only
- cl-transit — Transit library for Common Lisp — MIT
- clog-collection — A set of CLOG Plugins — MIT
- clohost — A client library for the Cohost API — zlib
- deptree — ASDF systems dependency listing and archiving tool for Common Lisp — MIT
- enhanced-unwind-protect — Provides an enhanced UNWIND-PROTECT that makes it easy to detect whether the protected form performed a non-local exit or returned normally. — Unlicense
- file-lock — File lock library on POSIX systems — MIT License
- fmcs — Flavors Meta-Class System (FMCS) for Demonic Metaprogramming in Common Lisp, an alternative to CLOS+MOP, restored from the CMU AI Repository. — MIT
- fuzzy-dates — A library to fuzzily parse date strings — zlib
- glfw — An up-to-date bindings library to the most recent GLFW OpenGL context management library — zlib
- hyperlattices — Generalized Lattice algebraic datatypes, incl., LATTICE, HYPERLATTICE, PROBABILISTIC-LATTICE, and PROBABILISTIC-HYPERLATTICE. — MIT
- lemmy-api — Most recently generated bindings to the lemmy api — GPLv3
- manifolds — Various manifold mesh algorithms — zlib
- mutils — A collection of Common Lisp modules. — MIT
- ptc — Proper Tail Calls for CL — MIT
- type-templates — A library for defining and expanding templated functions — zlib
Updated projects: 3bmd, 3d-matrices, 3d-quaternions, 3d-spaces, 3d-transforms, 3d-vectors, 40ants-asdf-system, 40ants-slynk, action-list, adhoc, adp, alexandria, also-alsa, anypool, april, architecture.builder-protocol, array-operations, array-utils, asdf-flv, async-process, atomics, bdef, bike, binary-structures, binding-arrows, bordeaux-threads, bp, bubble-operator-upwards, cari3s, cephes.cl, cffi, chirp, chlorophyll, chunga, ci, cl+ssl, cl-6502, cl-all, cl-async, cl-atelier, cl-autowrap, cl-bcrypt, cl-bmp, cl-change-case, cl-clon, cl-collider, cl-colors2, cl-confidence, cl-containers, cl-cron, cl-data-structures, cl-dbi, cl-digraph, cl-fast-ecs, cl-fbx, cl-flac, cl-forms, cl-gamepad, cl-gists, cl-glib, cl-gltf, cl-gobject-introspection, cl-gobject-introspection-wrapper, cl-gopher, cl-gpio, cl-gserver, cl-hash-util, cl-html-parse, cl-i18n, cl-isaac, cl-jingle, cl-jsonl, cl-k8055, cl-kanren, cl-ktx, cl-lib-helper, cl-liballegro, cl-liballegro-nuklear, cl-markless, cl-marshal, cl-messagepack, cl-migratum, cl-mixed, cl-mlep, cl-modio, cl-moneris, cl-monitors, cl-mount-info, cl-mpg123, cl-naive-store, cl-opus, cl-out123, cl-patterns, cl-pdf, cl-permutation, cl-project, cl-protobufs, cl-pslib, cl-pslib-barcode, cl-rashell, cl-readline, cl-rfc4251, cl-sdl2, cl-sdl2-image, cl-sendgrid, cl-skkserv, cl-soloud, cl-spidev, cl-ssh-keys, cl-steamworks, cl-str, cl-tcod, cl-tiled, cl-tls, cl-utils, cl-veq, cl-voipms, cl-vorbis, cl-wavefront, cl-webkit, cl-webmachine, cl-wol, cl-yxorp, clack, classowary, clingon, clip, clog, closer-mop, clss, clunit2, codex, coleslaw, colored, com-on, common-lisp-jupyter, computable-reals, conduit-packages, croatoan, crypto-shortcuts, ctype, cytoscape-clj, dartsclhashtree, data-frame, data-lens, data-table, datafly, datamuse, decompress, deeds, deferred, definitions, deploy, depot, dexador, dissect, djula, dml, dns-client, doc, documentation-utils, drakma, dynamic-classes, easter-gauss, easy-routes, ecclesia, eclector, enhanced-eval-when, 
enhanced-multiple-value-bind, erjoalgo-webutil, extensible-compound-types, f2cl, fare-scripts, fast-http, feeder, file-attributes, file-notify, file-select, filesystem-utils, fiveam, fiveam-matchers, flare, float-features, flow, font-discovery, for, form-fiddle, function-cache, functional-trees, gendl, github-api-cl, glsl-toolkit, gtirb-capstone, gtwiwtg, harmony, helambdap, humbler, hunchentoot-errors, iclendar, imago, inkwell, ironclad, journal, json-mop, jsonrpc, jzon, kekule-clj, khazern, lack, lambda-fiddle, language-codes, lass, legion, legit, let-over-lambda, lev, lichat-ldap, lichat-protocol, lichat-serverlib, lichat-tcp-client, lichat-tcp-server, lichat-ws-server, lift, lisp-binary, lisp-critic, lisp-interface-library, lisp-pay, lisp-stat, local-time, lquery, lru-cache, luckless, macro-level, maiden, math, mcclim, memory-regions, messagebox, mgl-mat, mgl-pax, mito, mmap, mnas-path, mnas-string, modularize, modularize-hooks, modularize-interfaces, multilang-documentation, multiposter, mutility, named-readtables, nibbles, ningle, nodgui, north, numerical-utilities, numpy-file-format, nytpu.lisp-utils, omglib, one-more-re-nightmare, openapi-generator, orizuru-orm, osicat, ospm, oxenfurt, pango-markup, parachute, parseq, pathname-utils, petalisp, piping, plot, plump, plump-bundle, plump-sexp, plump-tex, policy-cond, posix-shm, postmodern, ppath, prettier-builtins, promise, psychiq, punycode, purgatory, py4cl2-cffi, qlot, quickhull, random-state, ratify, reblocks, reblocks-auth, reblocks-prometheus, redirect-stream, rove, s-dot2, sc-extensions, scribble, sel, serapeum, sha3, shasht, shop3, si-kanren, simple-inferiors, simple-tasks, sketch, slite, sly, softdrink, south, speechless, spinneret, staple, statistics, stopclock, studio-client, stumpwm, sxql, system-locale, terrable, testiere, tfeb-lisp-hax, tfeb-lisp-tools, tiny-routes, tooter, trivial-arguments, trivial-benchmark, trivial-clipboard, trivial-custom-debugger, trivial-extensible-sequences, 
trivial-garbage, trivial-gray-streams, trivial-indent, trivial-main-thread, trivial-mimes, trivial-sanitize, trivial-thumbnail, trivial-timeout, trivial-utf-8, trucler, try, typo, uax-14, uax-9, ubiquitous, unboxables, vellum, vellum-csv, vellum-postmodern, verbose, websocket-driver, woo, xmls, yah, zippy.
Removed projects: cl-bson, cl-fastcgi, more-cffi, myweb, parse-number-range, quilc, qvm.
To get this update, use (ql:update-dist "quicklisp")
What's up with Quicklisp updates taking way longer than usual? A couple things.
First, life has been pretty crazy for me, and I'm the only one working on Quicklisp updates. If anyone wants to collaborate, please let me know. There are some simple things that could improve the time between releases.
Second, there are now enough things in Quicklisp that every month something is broken at a critical time when I'm planning a release. I need to work around this with some better management software, but that takes time and things are pretty crazy for me (see above).
I hope to get back on track for regular monthly releases soon. Thanks for your support and thanks for using Quicklisp.
Eugene Zaikonnikov — Announcing deptree
@2023-10-22 15:00 · 37 days ago
Deptree is a tool to list and archive dependency snapshots of (ASDF-defined) projects. We at Norphonic use it in the product build pipeline, but it can be useful for integration workflows as well. The task sounds common enough that I have little doubt I am reinventing the wheel here. Alas, I couldn't find any readily available solutions, nor could the good folks at #commonlisp recall any, so there.
Available in the latest Quicklisp.
Eugene Zaikonnikov — Also ALSA gets Mixer API
@2023-10-21 15:00 · 38 days ago
Also ALSA now has simple ALSA Mixer API support. See set-mixer-element-volume for sample use.
Available in the latest Quicklisp.
vindarel — Common Lisp on the web: enrich your stacktrace with request and session data
@2023-10-13 14:51 · 46 days ago
A short post to show the usefulness of Hunchentoot-errors and to thank Mariano again.
This library adds the current request and session data to your stacktrace, either in the REPL (base case) or in the browser.
TLDR;
Use it like this:
;; (ql:quickload "hunchentoot-errors")
;;
;; We also use easy-routes: (ql:quickload "easy-routes")
(defclass acceptor (easy-routes:easy-routes-acceptor hunchentoot-errors:errors-acceptor)
()
(:documentation "Our Hunchentoot acceptor that uses easy-routes and hunchentoot-errors, for easier route definition and enhanced stacktraces with request and session data."))
then (make-instance 'acceptor :port 4242)
.
Base case
Imagine you have a bug in your route:
(easy-routes:defroute route-card-page ("/card/:slug" :method :GET :decorators ((@check-roles admin-role)))
(&get debug)
(error "oh no"))
When you access localhost:4242/card/100-common-lisp-recipes
, you will see this in the REPL:
[2023-10-13 16:48:21 [ERROR]] oh no
Backtrace for: #<SB-THREAD:THREAD "hunchentoot-worker-127.0.0.1:53896" RUNNING {10019A21A3}>
0: (TRIVIAL-BACKTRACE:PRINT-BACKTRACE-TO-STREAM #<SB-IMPL::CHARACTER-STRING-OSTREAM {1006E9ED43}>)
1: (HUNCHENTOOT::GET-BACKTRACE)
2: ((FLET "H0" :IN HUNCHENTOOT:HANDLE-REQUEST) #<SIMPLE-ERROR "oh no" {1006E9EBE3}>)
3: (SB-KERNEL::%SIGNAL #<SIMPLE-ERROR "oh no" {1006E9EBE3}>)
4: (ERROR "oh no")
5: (MYWEBAPP/WEB::ROUTE-CARD-PAGE "100-common-lisp-recipes")
6: ((:METHOD HUNCHENTOOT:ACCEPTOR-DISPATCH-REQUEST (EASY-ROUTES:EASY-ROUTES-ACCEPTOR T)) #<MYWEBAPP/WEB::ACCEPTOR (host *, port 4242)> #<HUNCHENTOOT:REQUEST {1006C55F33}>) [fast-method]
7: ((:METHOD HUNCHENTOOT:HANDLE-REQUEST (HUNCHENTOOT:ACCEPTOR HUNCHENTOOT:REQUEST)) #<MYWEBAPP/WEB::ACCEPTOR (host *, port 4242)> #<HUNCHENTOOT:REQUEST {1006C55F33}>) [fast-method]
[...]
And, by default, you see a basic error message in the browser:
Show errors
Set this:
(setf hunchentoot:*show-lisp-errors-p* t)
Now you can see a backtrace in the browser window, which is of course super useful during development:
BTW, if you unset this one:
(setf hunchentoot:*show-lisp-backtraces-p* nil) ;; t by default
You will see the error message, but not the backtrace:
And I remind you that if you set *catch-errors-p* to nil, you’ll get the debugger inside your IDE (Hunchentoot will not catch the errors, and will pass them to you).
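To summarize, a development-time configuration might look like this (a sketch; these three special variables are all part of Hunchentoot's API):

```lisp
;; Show errors and their backtraces in the browser during development.
(setf hunchentoot:*show-lisp-errors-p* t)
(setf hunchentoot:*show-lisp-backtraces-p* t)  ; t is the default
;; Don't catch errors at all: let the debugger open in your IDE instead.
(setf hunchentoot:*catch-errors-p* nil)
```

In production you would typically leave *catch-errors-p* at t and set *show-lisp-errors-p* back to nil.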
Now with request and session data
Now create your server with our new acceptor, inheriting hunchentoot-errors.
You’ll see the current request and session parameters both in the REPL:
[...]
19: (SB-THREAD::NEW-LISP-THREAD-TRAMPOLINE #<SB-THREAD:THREAD "hunchentoot-worker-127.0.0.1:48756" RUNNING {100AAAFC43}> NIL #<CLOSURE (LAMBDA NIL :IN BORDEAUX-THREADS::BINDING-DEFAULT-SPECIALS) {100AAAFBEB}> NIL)
20: ("foreign function: call_into_lisp")
21: ("foreign function: new_thread_trampoline")
HTTP REQUEST:
uri: /card/100-common-lisp-recipes
method: GET
headers:
HOST: localhost:4242
USER-AGENT: Mozilla/5.0 (X11; Linux x86_64; rv:103.0) Gecko/20100101 Firefox/103.0
ACCEPT: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8
ACCEPT-LANGUAGE: fr,fr-FR;q=0.8,en-US;q=0.5,en;q=0.3
ACCEPT-ENCODING: gzip, deflate, br
DNT: 1
CONNECTION: keep-alive
COOKIE: "..."
UPGRADE-INSECURE-REQUESTS: 1
SEC-FETCH-DEST: document
SEC-FETCH-MODE: navigate
SEC-FETCH-SITE: none
SEC-FETCH-USER: ?1
SESSION:
:USER: #<MYWEBAPP.MODELS:USER {100EA8C753}>
127.0.0.1 - [2023-10-13 17:32:18] "GET /card/100-common-lisp-recipes HTTP/1.1" 500 5203 "-" "Mozilla/5.0 (X11; Linux x86_64; rv:103.0) Gecko/20100101 Firefox/103.0"
and in the browser:
(notice the #<USER {...}> at the bottom? You’ll need a commit from today to see it, instead of # only)
Final words
These Hunchentoot variables are only briefly explained on the Cookbook’s web.html page; I’ll augment that.
Clack users can use the clack-errors middleware.
Who wants to send a PR for colourful stacktraces?
Joe Marshall — Syntax-rules Primer
@2023-10-13 09:17 · 46 days agoI recently had an inquiry about the copyright status of my JRM’s Syntax Rules Primer for the Merely Eccentric. I don’t want to put it into the public domain, as that would allow anyone to rewrite it at will while keeping the title. Instead, I’d like to release it under an MIT-style license: feel free to copy it and distribute it, correct any errors, but please retain the general gist of the article, the title, and the authorship.
Tim Bradshaw — Symbol nicknames: a broken toy
@2023-10-12 14:08 · 47 days agoSymbol nicknames allows multiple names to refer to the same symbol in supported implementations of Common Lisp. That may or may not be useful.
People often say the Common Lisp package system is deficient. But a lot of the same people write code which is absolutely full of explicit package prefixes in what I can only suppose is an attempt to make programs harder to read. Somehow this is meant to be made better by using package-local nicknames for packages. And let’s not mention the unspeakable idiocy that is thinking that a package name like, say, XML is suitable for any kind of general use at all. So forgive me if I don’t take their concerns too seriously.
The CL package system can’t do all the things something like the Racket module system can do. But it’s not clear that, given its job of collecting symbols into, well, packages, it could do that much more than it currently does. Probably some kind of ‘package universe’ notion such as Symbolics Genera had would be useful. But the namespace has to be anchored somewhere, and if you’re willing to give packages domain-structured names in the obvious way and spend time actually constructing a namespace for the language you want to use, it’s perfectly pleasant in my experience.
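A sketch of what domain-structured package names look like in practice (all names here are made up for illustration):

```lisp
;; Domain-structured package names anchor the namespace in the obvious way,
;; so generic words like COLLECTIONS never collide globally.
(defpackage :com.example.mylang.collections
  (:use :cl)
  (:export #:lookup #:insert))

(defpackage :com.example.mylang.user
  (:use :cl :com.example.mylang.collections))
```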
One thing that might be useful is to allow multiple names to refer to the same symbol. So for instance you might want to have eq? be the same symbol as eq:
> (setf (nickname-symbol "EQ?") 'eq)
eq
> (eq 'eq? 'eq)
t
> (eq? 'eq 'eq?)
t
This allows you to construct languages which have different names for things, but where the names are translated to the underlying name efficiently. As another example, let’s say you wanted to call eql equivalent-p:
> (setf (nickname-symbol "EQUIVALENT-P") 'eql)
eql
> (eql 'eql 'equivalent-p)
t
Well, now you can use equivalent-p as a synonym for eql wherever it occurs:
> (defmethod foo ((x (equivalent-p 1)))
"x is 1")
#<standard-method foo nil ((eql 1)) 801005BD23>
> (foo 1)
"x is 1"
Symbol nicknames is not completely portable as it requires hooking string-to-symbol lookup. It is supported in LispWorks and SBCL currently: it will load in other Lisps but will complain that it can’t infect them.
Symbol nicknames is also not completely compatible with CL. In CL you can assume that (find-symbol "FOO") either returns a symbol whose name is "FOO", or nil and nil: with symbol nicknames you can’t. In the case where a nickname link has been followed, the second value of find-symbol will be :nickname.
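Code that needs to detect this can check the second value (a sketch, assuming the symbol-nicknames extension is loaded and the nickname from the first example has been installed):

```lisp
;; Assuming (setf (nickname-symbol "EQ?") 'eq) was evaluated earlier,
;; FIND-SYMBOL returns the symbol EQ with a second value of :NICKNAME.
(multiple-value-bind (symbol status) (find-symbol "EQ?")
  (case status
    (:nickname (format t "~s was reached via a nickname link~%" symbol))
    ((nil)     (format t "no such symbol~%"))
    (t         (format t "ordinary lookup, status ~s~%" status))))
```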
Symbol nicknames is a toy. I am not convinced that the idea is even useful, and if it is it probably needs to be thought about more than I have.
But it exists.
TurtleWare — Proxy Generic Function
@2023-10-03 00:00 · 56 days agoIt is often hard to refactor software implementing an independent specification. There are already clients of the API, so we can't remove operators, and newly added operators must play by the specified rules. There are a few possibilities: break the user contract and make pre-existing software obsolete, or abandon some improvements. But there is also another option when the software is written in Common Lisp: you can eat your cake and have it too.
CLIM has two protocols that have a big overlap: sheets and output records. Both abstractions are organized in a similar way and have equivalent operators. In this example let's consider a part of the protocol for managing hierarchies:
;; Sheet hierarchy (sub-)protocol with an example implementation.
(defclass sheet () ()) ; protocol class
(defclass example-sheet (sheet)
((children :initform '() :accessor sheet-children)))
(defgeneric note-sheet-adopted (sheet)
(:method (sheet) nil))
(defgeneric note-sheet-disowned (sheet)
(:method (sheet) nil))
(defgeneric adopt-sheet (parent child)
(:method ((parent example-sheet) child)
(push child (sheet-children parent))
(note-sheet-adopted child)))
(defgeneric disown-sheet (parent child &optional errorp)
(:method ((parent example-sheet) child &optional (errorp t))
(and errorp (assert (member child (sheet-children parent))))
(setf (sheet-children parent)
(remove child (sheet-children parent)))
(note-sheet-disowned child)))
;; Output record hierarchy (sub-)protocol with an example implementation.
(defclass output-record () ()) ; protocol class
(defclass example-record (output-record)
((children :initform '() :accessor output-record-children)))
(defgeneric add-output-record (child parent)
(:method (child (parent example-record))
(push child (output-record-children parent))))
(defgeneric delete-output-record (child parent &optional errorp)
(:method (child (parent example-record) &optional (errorp t))
(and errorp (assert (member child (output-record-children parent))))
(setf (output-record-children parent)
(remove child (output-record-children parent)))))
Both protocols are very similar and do roughly the same thing. We are tempted to flesh out a single protocol to reduce the cognitive overhead when dealing with hierarchies.
;; The mixin is not strictly necessary - output records and sheets may have
;; wildly different internal structures - this is for the sake of simplicity;
;; most notably it is _not_ a protocol class. We don't do protocol classes.
(defclass node-mixin ()
((scions :initform '() :accessor node-scions)))
(defgeneric note-node-parent-changed (node parent adopted-p)
(:method (node parent adopted-p)
(declare (ignore node parent adopted-p))
nil))
(defgeneric insert-node (elder scion)
(:method :after (elder scion)
(note-node-parent-changed scion elder t))
(:method ((elder node-mixin) scion)
(push scion (node-scions elder))))
(defgeneric delete-node (elder scion)
(:method :after (elder scion)
(note-node-parent-changed scion elder nil))
(:method ((elder node-mixin) scion)
(setf (node-scions elder) (remove scion (node-scions elder)))))
We define a mixin class for simplicity. In principle we care only about the new protocol and different classes may have different internal representations. Now that we have a brand new unified protocol, it is time to rewrite the old code:
;; Sheet hierarchy (sub-)protocol with an example implementation.
(defclass sheet () ()) ; protocol class
(defclass example-sheet (node-mixin sheet) ())
(defgeneric note-sheet-adopted (sheet)
(:method (sheet)
(declare (ignore sheet))
nil))
(defgeneric note-sheet-disowned (sheet)
(:method (sheet)
(declare (ignore sheet))
nil))
(defmethod note-node-parent-changed :after ((sheet sheet) parent adopted-p)
(declare (ignore parent))
(if adopted-p
(note-sheet-adopted sheet)
(note-sheet-disowned sheet)))
(defgeneric adopt-sheet (parent child)
(:method (parent child)
(insert-node parent child)))
(defgeneric disown-sheet (parent child &optional errorp)
(:method (parent child &optional (errorp t))
(and errorp (assert (member child (node-scions parent))))
(delete-node parent child)))
;; Output record hierarchy (sub-)protocol with an example implementation.
(defclass output-record () ()) ; protocol class
(defclass example-record (node-mixin output-record) ())
(defgeneric add-output-record (child parent)
(:method (child parent)
(insert-node parent child)))
(defgeneric delete-output-record (child parent &optional errorp)
(:method (child parent &optional (errorp t))
(and errorp (assert (member child (node-scions parent))))
(delete-node parent child)))
Peachy! Now we can call (delete-node parent child) and this will work equally well for both sheets and output records. It is time to ship the code and boast about how clever we are (and advertise the new API). After a weekend we realize that there is a problem with our solution!
Since the old API is alive and kicking, the user may still call adopt-sheet, or, if they want to switch to the new API, they may call insert-node. This is fine, and we have rewritten all our code so that the new element will always be added. But what about user methods?
There may be a legacy code that defines its additional constraints, for example:
(defvar *temporary-freeze* nil)
(defmethod add-output-record :before (child (record output-record))
(declare (ignore child record))
(when *temporary-freeze*
(error "No-can-do's-ville, baby doll!")))
When the new code calls insert-node, then this method won't be called and the constraint will fail. There is an interesting idea: perhaps instead of trampolining from the sheet protocol to the node protocol functions we could do it the other way around - specialized node protocol methods would call the sheet protocol functions. This is futile - the problem is symmetrical. In that case, if some legacy code calls adopt-sheet, then our node methods won't be called.
That's quite a pickle we are in. The main problem is that we are not in control of all definitions and the cat is out of the bag. So what about the cake? The cake is a lie of course! … I'm kidding, of course there is the cake.
When Common Lisp programmers encounter a problem that seems impossible to solve, they usually think of one of three solutions: write a macro, write a DSL compiler, or use the metaobject protocol. Usually the solution is a mix of these three things. We are dealing with generic functions - the MOP it is.
The problem could be summarized as follows:
- We have under our control a new function that implements the program logic
- We have under our control old functions that call the new function
- We have legacy methods outside of our control defined on old functions
- We will have new methods outside of our control defined on the new function
- Sometimes lambda lists between protocols are not compatible
We want the new function to call legacy methods when invoked, and we want to ensure that old functions always call the new function (i.e. it is not possible for legacy (sheet-disown-child :around) methods to bypass delete-node).
In order to do that, we will define a new generic function class responsible for mangling arguments when a method is called with make-method-lambda, and for proxying add-method to the target class. That's all. When a new legacy method is added to the generic function sheet-disown-child, it will be hijacked and added to the generic function delete-node instead.
First, some syntactic sugar. defgeneric is a good operator, except that it signals an error when we pass options that are not specified. Moreover, some compilers are tempted to macroexpand methods at compile time, so we'll expand the new macro in the dynamic environment of a definition:
(eval-when (:compile-toplevel :load-toplevel :execute)
(defun mappend (fun &rest lists)
(loop for results in (apply #'mapcar fun lists) append results)))
;;; syntactic sugar -- like defgeneric but accepts unknown options
(defmacro define-generic (name lambda-list &rest options)
(let ((declarations '())
(methods '()))
(labels ((parse-option (option)
(destructuring-bind (name . value) option
(case name
(cl:declare
(setf declarations (append declarations value))
nil)
(:method
(push value methods)
nil)
((:documentation :generic-function-class :method-class)
`(,name (quote ,@value)))
((:argument-precedence-order :method-combination)
`(,name (quote ,value)))
(otherwise
`(,name (quote ,value))))))
(expand-generic (options)
`(c2mop:ensure-generic-function
',name
:name ',name :lambda-list ',lambda-list
:declarations ',declarations ,@options))
(expand-method (method)
`(c2mop:ensure-method (function ,name) '(lambda ,@method))))
;; We always expand to ENSURE-FOO because we want dynamic variables like
;; *INSIDE-DEFINE-PROXY-P* to be correctly bound during the creation.
`(progn
,(expand-generic (mappend #'parse-option options))
,@(mapcar #'expand-method methods)))))
Now we will add a macro that defines a proxy generic function. We include a dynamic flag that will communicate to make-method-lambda and the add-method function that we are still in the initialization phase and that methods should be added to the proxy generic function:
(defvar *inside-define-proxy-p* nil)
(defmacro define-proxy-gf (name lambda-list &rest options)
`(let ((*inside-define-proxy-p* t))
(define-generic ,name ,lambda-list
(:generic-function-class proxy-generic-function)
,@options)))
The proxy generic function may have a different lambda list than the target. That's indeed the case with our protocol - we don't have the argument errorp in the function delete-node. We want to allow default methods in order to implement that missing behavior. We will mangle arguments according to the template specified in :mangle-args in the function mangle-args-expression.
(defclass proxy-generic-function (c2mop:standard-generic-function)
((target-gfun :reader target-gfun)
(target-args :initarg :target-args :reader target-args)
(mangle-args :initarg :mangle-args :reader mangle-args))
(:metaclass c2mop:funcallable-standard-class)
(:default-initargs :target-gfun (error "~s required" :target-gfun)
:target-args nil
:mangle-args nil))
(defmethod shared-initialize :after ((gf proxy-generic-function) slot-names
&key (target-gfun nil target-gfun-p))
(when target-gfun-p
(assert (null (rest target-gfun)))
(setf (slot-value gf 'target-gfun)
(ensure-generic-function (first target-gfun)))))
To ensure that a proxied method can invoke call-next-method we must be able to mangle arguments both ways. The target generic function's lambda list is stated verbatim in the :target-args argument, while the source generic function's lambda list is read from c2mop:generic-function-lambda-list.
The function make-method-lambda is tricky to get right, but it gives quite a bit of control over the method invocation. Default methods are added normally, so we don't mangle arguments in the trampoline method; otherwise we convert the target call into the lambda list of the defined method:
;;; MAKE-METHOD-LAMBDA is expected to return a lambda expression compatible with
;;; CALL-METHOD invocations in the method combination. The first argument is
;;; the prototype generic function's arguments (the function a method is initially
;;; defined for) and the remainder are all arguments passed to CALL-METHOD - in a
;;; default combination there is one such argument - next-methods. The second
;;; returned value is extra initialization arguments for the method instance.
;;;
;;; Our goal is to construct a lambda expression that will construct a function
;;; which instead of the prototype argument list accepts the proxied function
;;; arguments and mangles them to call the defined method body. Something like:
;;;
#+ (or)
(lambda (proxy-gfun-call-args &rest call-method-args)
(flet ((original-method (method-arg-1 method-arg-2 ...)))
(apply #'original-method (mangle-args proxy-gfun-call-args))))
(defun mangle-args-expression (gf type args)
(let ((lambda-list (ecase type
(:target (target-args gf))
(:source (c2mop:generic-function-lambda-list gf)))))
`(destructuring-bind ,lambda-list ,args
(list ,@(mangle-args gf)))))
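To make the mangling concrete: for a hypothetical proxy whose :target-args is (parent child) and whose :mangle-args is (child parent), the call (mangle-args-expression gf :target 'proxy-args) would produce an expression along these lines:

```lisp
;; Hypothetical expansion: destructure the arguments in target order,
;; then rebuild them in the order the proxied method expects.
(destructuring-bind (parent child) proxy-args
  (list child parent))
```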
(defun mangle-method (gf gf-args lambda-expression)
(let ((mfun (gensym)))
`(lambda ,(second lambda-expression)
;; XXX It is not conforming to shadow locally CALL-NEXT-METHOD. That said
;; we subclass C2MOP:STANDARD-GENERIC-FUNCTION and they do that too(!).
(flet ((call-next-method (&rest args)
(if (null args)
(call-next-method)
;; CALL-NEXT-METHOD is called with arguments are meant for
;; the proxy function lambda list. We first need to destruct
;; them and then mangle again.
(apply #'call-next-method
,(mangle-args-expression gf :target
(mangle-args-expression gf :source 'args))))))
(flet ((,mfun ,@(rest lambda-expression)))
(apply (function ,mfun) ,(mangle-args-expression gf :target gf-args)))))))
(defmethod c2mop:make-method-lambda
((gf proxy-generic-function) method lambda-expression environment)
(declare (ignorable method lambda-expression environment))
(if (or *inside-define-proxy-p* (null (mangle-args gf)))
(call-next-method)
`(lambda (proxy-args &rest call-method-args)
(apply ,(call-next-method gf method (mangle-method gf 'proxy-args lambda-expression) environment)
proxy-args call-method-args))))
That leaves us with the last method, add-method, which decides where to add the method - to the proxy function or to the target function.
(defmethod add-method ((gf proxy-generic-function) method)
(when *inside-define-proxy-p*
(return-from add-method (call-next-method)))
;; The warning will go away in the production code because we don't want to
;; barf at a normal client code.
(warn "~s is deprecated, please use ~s instead."
(c2mop:generic-function-name gf)
(c2mop:generic-function-name (target-gfun gf)))
(if (or (typep method 'c2mop:standard-accessor-method) (null (mangle-args gf)))
;; XXX readers and writers always have congruent lambda lists so this should
;; be fine. Besides we don't know how to construct working accessors on some
;; (ekhm sbcl) implementations, because they have problems with invoking
;; user-constructed standard accessors (with passed :SLOT-DEFINITION SLOTD).
(add-method (target-gfun gf) method)
(let* ((method-class (class-of method))
(old-lambda-list (c2mop:generic-function-lambda-list gf))
(new-lambda-list (target-args gf))
(new-specializers (loop with spec = (c2mop:method-specializers method)
for arg in new-lambda-list
until (member arg '(&rest &optional &key))
collect (nth (position arg old-lambda-list) spec)))
;; It would be nice if we could reinitialize the method.. but we can't.
(new-method (make-instance method-class
:lambda-list new-lambda-list
:specializers new-specializers
:qualifiers (method-qualifiers method)
:function (c2mop:method-function method))))
(add-method (target-gfun gf) new-method))))
That's it. We've defined a new generic function class that allows specifying proxies. Now we can replace definitions of generic functions that are under our control. The new (the final) implementation looks like this:
;; Sheet hierarchy (sub-)protocol with an example implementation.
(defclass sheet () ()) ; protocol class
(defclass example-sheet (node-mixin sheet) ())
(defgeneric note-sheet-adopted (sheet)
(:method (sheet)
(declare (ignore sheet))
nil))
(defgeneric note-sheet-disowned (sheet)
(:method (sheet)
(declare (ignore sheet))
nil))
(defmethod note-node-parent-changed :after ((sheet sheet) parent adopted-p)
(declare (ignore parent))
(if adopted-p
(note-sheet-adopted sheet)
(note-sheet-disowned sheet)))
(define-proxy-gf adopt-sheet (parent child)
(:target-gfun insert-node)
(:target-args parent child)
(:mangle-args parent child)
(:method (parent child)
(insert-node parent child)))
(define-proxy-gf disown-sheet (parent child &optional errorp)
(:target-gfun delete-node)
(:target-args parent child)
(:mangle-args parent child nil)
(:method (parent child &optional (errorp t))
(and errorp (assert (member child (node-scions parent))))
(delete-node parent child)))
;; Output record hierarchy (sub-)protocol with an example implementation.
(defclass output-record () ()) ; protocol class
(defclass example-record (node-mixin output-record) ())
(define-proxy-gf add-output-record (child parent)
(:target-gfun insert-node)
(:target-args parent child)
(:mangle-args child parent)
(:method (child parent)
(insert-node parent child)))
(define-proxy-gf delete-output-record (child parent &optional errorp)
(:target-gfun delete-node)
(:target-args parent child)
(:mangle-args child parent)
(:method (child parent &optional (errorp t))
(and errorp (assert (member child (node-scions parent))))
(delete-node parent child)))
And this code is defined in a separate compilation unit:
;; Legacy code in a third-party library.
(defvar *temporary-freeze* nil)
(defmethod add-output-record :before (child (record output-record))
(declare (ignore child))
(when *temporary-freeze*
(error "No-can-do's-ville, baby doll!")))
;; Bleeding edge code in an experimental third-party library.
(defvar *logging* nil)
(defmethod insert-node :after ((record output-record) child)
(declare (ignore child))
(when *logging*
(warn "The record ~s has been extended!" record)))
Dare we try it? You bet we do!
(defparameter *parent* (make-instance 'example-record))
(defparameter *child1* (make-instance 'example-record))
(defparameter *child2* (make-instance 'example-record))
(defparameter *child3* (make-instance 'example-record))
(defparameter *child4* (make-instance 'example-record))
(defparameter *child5* (make-instance 'example-record))
(add-output-record *child1* *parent*)
(print (node-scions *parent*)) ;1 element
(insert-node *parent* *child2*)
(print (node-scions *parent*)) ;1 element
;; So far good!
(let ((*temporary-freeze* t))
(handler-case (adopt-sheet *parent* *child3*)
(error (c) (print `("Good!" ,c)))
(:no-error (c) (print `("Bad!!" ,c))))
(handler-case (add-output-record *child3* *parent*)
(error (c) (print `("Good!" ,c)))
(:no-error (c) (print `("Bad!!" ,c))))
(handler-case (insert-node *parent* *child3*)
(error (c) (print `("Good!" ,c)))
(:no-error (c) (print `("Bad!!" ,c)))))
;; Still perfect!
(let ((*logging* t))
(handler-case (adopt-sheet *parent* *child3*)
(error (c) (print `("Bad!" ,c)))
(warning (c) (print `("Good!",c))))
(handler-case (add-output-record *child4* *parent*)
(error (c) (print `("Bad!" ,c)))
(warning (c) (print `("Good!",c))))
(handler-case (insert-node *parent* *child5*)
(error (c) (print `("Bad!" ,c)))
(warning (c) (print `("Good!",c)))))
(print `("We should have 5 children -- " ,(length (node-scions *parent*))))
(print (node-scions *parent*))
This solution has one possible drawback. We add methods from the proxy generic
function to the target generic function without discriminating. That means that
applicable methods defined on adopt-sheet
are called when add-output-record
is invoked (and vice versa). Moreover methods with the same set of specializers
in the target function may replace each other. On the flip side this is what we
arguably want – the unified protocol exhibits full behavior of all members. We
could have mitigated this problem by signaling an error for conflicting methods
from different proxies, but if you think about it, a conforming program must not
define methods that are not specialized on a subclass of the standard class -
otherwise they risk overwriting internal methods! In other words all is good.
Edit 1 Another caveat is that methods for the proxy generic function must be defined in a different compilation unit than the function itself. This is because of limitations of defmethod - the macro calls make-method-lambda when it is expanding the body (at compile time), while the function definition is processed at execution time.
That means that during the first compilation make-method-lambda will be called with a standard-generic-function prototype and the proxy won't work.
Edit 2 To handle call-next-method correctly we need to shadow it. That is not conforming, but it works when we subclass c2mop:standard-generic-function. As an alternative we could write a full make-method-lambda expansion that defines both call-next-method and next-method-p.
Cheers!
Daniel
P.S. if you like writing like this you may consider supporting me on Patreon.
Joe Marshall
@2023-09-27 16:31 · 62 days agoGreenspun's tenth rule of programming states
Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.
Observe that the Python interpreter is written in C.
In fact, most popular computer languages can be thought of as a poorly implemented Common Lisp. There is a reason for this. Church's lambda calculus is a great foundation for reasoning about programming language semantics. Lisp can be seen as a realization of a lambda calculus interpreter. By reasoning about a language's semantics in Lisp, we're essentially reasoning about the semantics in a variation of lambda calculus.
Paolo Amoroso — Exploring Medley as a Common Lisp development environment
@2023-09-25 09:38 · 64 days agoSince encountering Medley I gained considerable experience with Interlisp. Medley Interlisp is a project for preserving, reviving, and modernizing the Interlisp-D software development environment of the Lisp Machines Xerox created at PARC.
Nine months later I know enough to find my way around and confidently use most of the major system tools and features.
I read all the available documentation, books, and publications, so I know where to look for information. And I undertook Interlisp programming projects such as Stringscope, Braincons, Sysrama, and Femtounit.
Now I'm ready to explore Medley as a Common Lisp development environment.
Although most of the system, facilities, and tools are written in and designed around Interlisp, the companies that maintained and marketed Medley over time partially implemented Common Lisp and integrated it with the environment. The completion level of the implementation is somewhere between CLtL1 and CLtL2, plus CLOS via Portable Common Loops (PCL).
Motivation
I want to widen this experience to Common Lisp.
I'll leverage the more advanced Lisp dialect and interface with Interlisp's facilities as an application platform that comprises a rich set of libraries and tools such a window system, graphics primitives, menu facilities, and GUI controls for building applications. Each world can interoperate with the other, so Common Lisp functions can call Interlisp ones and the other way around.
Developing Common Lisp programs with Medley is both my goal and a way of achieving it through practice. Medley is an ideal self-contained computing universe for my personal projects and Common Lisp greatly enhances its toolbox.
Tools
The main tools for developing Common Lisp code are the same as for Interlisp: the SEdit structure editor for writing code; the File Manager, a make-like tool for tracking changes to Lisp objects in the running image and saving them to files; and the Executive (or Exec), the Lisp listener.
However, the workflow is subtly different.
In some cases taking advantage of the integration with Medley involves different steps for Common Lisp code. For example, defining and changing packages so that the File Manager notices and tracks them needs to be done in a certain order. And there are Medley extensions to the package forms.
When working with Common Lisp I open at least two Execs, a Common Lisp and an Interlisp one. The former is for testing, running, and evaluating Common Lisp code.
The Interlisp Exec is for launching system tools and interacting with the File Manager. Since all the symbols of SEdit, the File Manager, and other system tools are in the IL Interlisp package, in an Interlisp Exec it's not necessary to add package qualifiers to symbols all the time.
Exec commands such as DIR and CD work the same in both Execs.
Documentation
Medley's Common Lisp features aren't documented in the Interlisp Reference Manual, the main information source about the system. The reason is the companies that distributed and maintained the product ceased operations before the work on implementing and documenting Common Lisp was completed.
I found only a couple of good sources on Common Lisp under Medley.
The implementation notes and the release notes of Lyric, the music-themed codename of one of Interlisp-D's versions, provide an overview of the integration between Common Lisp and Medley. The release notes of Medley 1.0, a later version, expand on this. Issue 5 of HOTLINE!, a newsletter Xerox published for its Lisp customers, has useful step by step examples of creating and managing Common Lisp packages the Medley way.
Some of the system code of Medley is written in Common Lisp and may be a source of usage examples and idioms. I'm also writing Common Lisp code snippets to test my understanding of the integration with Medley.
Discuss... Email | Reply @amoroso@fosstodon.org
vindarel — I published 17 videos about Common Lisp macros - learn Lisp with a code-first tutorial 🎥 ⭐
@2023-09-15 15:07 · 74 days agoFor those who don’t know and who didn’t see the banner :D I am creating a Common Lisp course on the Udemy platform (with complementary videos on Youtube). I wanted to do something different from, and complementary to, writing on the Cookbook.
I worked on new videos this summer and I just finished editing the subtitles. I have added 17 videos (worth 1h30+ of code-driven content) about Common Lisp macros!
We cover a lot of content: quote, backquote and comma, “,@”, comparison with C macros, comparison with functions, GENSYM and variable capture, useful patterns (call-with...), compile-time computing, read-time evaluation... (full summary below)
- find the course here: https://www.udemy.com/course/common-lisp-programming/?couponCode=LISPMACROSPOWER (various videos are free to watch, so you can judge, and learn a couple things) (I can send free links to students, plz PM)
I recorded the last one, about the MACROSTEP tool, inside the Lem editor. It’s short, so you should have a look at what this new editor looks like. (I’m very excited about it. Did I say I started developing a Magit-like plugin for it?)
Who is this course for?
The whole course is for beginners in Lisp, although not total beginners in programming. This chapter is, logically, a bit more difficult than the others. If you haven’t written small Common Lisp programs yet, be gentle with yourself and stop if you don’t understand. (You can ask questions in the Udemy forum, of course.) In your case I would advise watching the introductory one, the comparison with C macros, the video on QUOTE, the “functions VS macros” one, and then carrying on at your own rhythm. Be sure to work on the previous chapters before tackling this one.
Content
This is what we see on the topic of macros. For a full overview of the course, what I want to do next (if you subscribe now, you’ll get new content for the same price) and read others’ feedback, see its GitHub project page (there are six more chapters including getting started, functions, iteration, condition handling...).
Table of Contents
- Content
- 7.1 A quick intro (FREE PREVIEW)
- 7.2. A comparison with C macros (FREE PREVIEW)
- 7.3 QUOTE (FREE PREVIEW)
- 7.4 Backquote and comma
- 7.5 How to spot you are using a macro
- 7.6 Functions vs macros
- 7.7 COMMA SPLICE ,@ the third most important macro mechanism
- 7.8 &body and other macro parameters. Our second macro model.
- 7.9 Putting this together: with-echo macro. Macroexpand in use.
- 7.10 GENSYM -the simple fix to the most dangerous macros gotcha
- 7.11 CALL-WITH pattern: simplifying macros
- 7.12 Compile time computing
- 7.13 Lists VS AST
- 7.14 Two example macros for compile-time computing
- 7.15 SYMBOL-MACRO
- 7.16 Read-time evaluation with #.
- 7.17 EDITOR TOOL: macrostep (FREE PREVIEW, Lem demo)
- Thanks
7.1 A quick intro (FREE PREVIEW)
Macros do not evaluate their arguments and expand to new code at compile time. What does that mean? A quick intro before diving deeper.
7.2. A comparison with C macros (FREE PREVIEW)
Lisp macros are NOT manipulating text, unlike C macros. Text manipulation leads to many unnecessary problems. We take a fun tour of a need that is trivial to state yet complicated to meet in C, and easily done in Common Lisp.
7.3 QUOTE (FREE PREVIEW)
QUOTE does not evaluate its argument.
What we see: how to use QUOTE outside macros. Data takes the shape of code. We pair it with eval and we go full circle. We introduce the need to extrapolate values inside a quote.
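A minimal sketch of the idea (my own illustration, not course material):

```lisp
;; QUOTE returns its argument unevaluated: data takes the shape of code.
(quote (+ 1 2))   ; => (+ 1 2), a plain list of three elements
'(+ 1 2)          ; reader shorthand for the same thing

;; Pair it with EVAL and we go full circle: the data runs as code.
(eval '(+ 1 2))   ; => 3
```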
7.4 Backquote and comma
What we see: how to extrapolate variable values. How backquote and comma can help create data structures. Real world examples.
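For instance, a minimal sketch (the variable name is illustrative):

```lisp
(let ((name "Alice"))
  ;; Backquote builds a template; comma extrapolates the variable's value.
  `(hello ,name))
;; => (HELLO "Alice")
```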
7.5 How to spot you are using a macro
Four tips to recognize if you are using a function or a macro, and why it matters.
7.6 Functions vs macros
Macros do NOT replace functions!
What we see: they are not higher-level functions. The subtle but logical need to re-compile functions that use macros.
Introducing MACROEXPAND.
Keeping compile-time computing in mind (more on that later). A look at a function’s disassembly. So... you might not need a macro yet ;)
7.7 COMMA SPLICE ,@ the third most important macro mechanism
What we see: when to use it, understanding the common error messages, passing body forms to our macro. Our first macro model.
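A sketch of the mechanism (this WITH-LOGGING macro is my own illustration, not from the course):

```lisp
(defmacro with-logging (&body body)
  ;; ,@ splices the list of body forms into the PROGN; a plain ,
  ;; would insert them as one nested list and produce invalid code.
  `(progn
     (format t "entering~%")
     ,@body
     (format t "leaving~%")))

(macroexpand-1 '(with-logging (do-this) (do-that)))
;; => (PROGN (FORMAT T "entering~%") (DO-THIS) (DO-THAT) (FORMAT T "leaving~%"))
```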
7.8 &body and other macro parameters. Our second macro model.
What we see: how &body differs from &rest. Macro parameters: lots of possibilities, but some conventions carry meaning. Our own DOLIST macro. Our second macro model you can follow.
7.9 Putting this together: with-echo macro. Macroexpand in use.
We build our first macro with backquote and comma-splice, even a quote followed by a comma. We use macroexpand.
7.10 GENSYM -the simple fix to the most dangerous macros gotcha
What we see: what is variable capture and how to avoid it. Writing our own REPEAT macro. A little discussion about Common Lisp VS Scheme macros. GENSYM can be used outside macros too.
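A sketch of the fix (this REPEAT is my guess at the shape of the course's version):

```lisp
(defmacro repeat (n &body body)
  ;; GENSYM makes a fresh, uninterned symbol for the loop counter,
  ;; so it cannot capture a variable of the same name inside BODY.
  (let ((i (gensym "I")))
    `(dotimes (,i ,n)
       ,@body)))

(let ((count 0))
  (repeat 3 (incf count))
  count)
;; => 3
```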
At this point you know enough to write all common macros. See the exercises for easy and not-so-easy ones.
7.11 CALL-WITH pattern: simplifying macros
We saw there can be subtle pitfalls when we write a macro. This pattern allows us to offload most of the work to a function, which has many advantages. We demo with our REPEAT macro.
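A sketch of the pattern, with an illustrative WITH-TIMING macro (assumed, not from the course):

```lisp
;; The function does the real work and can be redefined, traced and
;; debugged like any other function...
(defun call-with-timing (thunk)
  (let ((start (get-internal-real-time)))
    (unwind-protect (funcall thunk)
      (format t "took ~a ticks~%" (- (get-internal-real-time) start)))))

;; ...while the macro is only a thin layer of syntax around it.
(defmacro with-timing (&body body)
  `(call-with-timing (lambda () ,@body)))
```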
7.12 Compile time computing
When writing macros, we have the full power of Common Lisp at compile time. This gives great tools to the developer: early type errors and warnings, faster runtime.
What we see: a simple example, writing a scientific macro for unit conversion at compile time, existing libraries for that, an introduction to dispatching macro characters and reader macros.
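A sketch of the idea, with a hypothetical conversion macro:

```lisp
(defmacro kilometers->meters (km)
  ;; When KM is a literal number, the arithmetic happens at
  ;; macroexpansion time and only the result reaches the runtime.
  (if (numberp km)
      (* km 1000)
      `(* ,km 1000)))

(macroexpand-1 '(kilometers->meters 3))   ; => 3000
(macroexpand-1 '(kilometers->meters x))   ; => (* X 1000)
```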
7.13 Lists VS AST
What we see: other languages don’t have macros but can manipulate Abstract Syntax Trees. Code as lists of symbols is not the same; we would need a third-party library to manipulate a proper Lisp AST. This doesn’t prevent us from developing crazy macros, though: see this library adding Haskell-like type checking on top of Common Lisp, in pure CL macros.
7.14 Two example macros for compile-time computing
defstar allows to specify a function’s arguments’ types, Serapeum’s ecase-of does exhaustiveness type checking. At compile time, of course.
7.15 SYMBOL-MACRO
A symbol macro is not your everyday Lisp development tool, but it expands your toolbelt. Again.
7.16 Read-time evaluation with #.
Macros expand at compile time. But Common Lisp blurs the lines between read time, compile time and run time. This allows us to execute code at READ time.
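A minimal sketch (the variable here is illustrative):

```lisp
;; The reader replaces #.(...) with the form's value before the
;; compiler ever sees the code.
(defparameter *table-size* #.(* 16 1024))
*table-size* ; => 16384
```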
7.17 EDITOR TOOL: macrostep (FREE PREVIEW, Lem demo)
Macrostep is an editor extension that helps understand our macro expansions. It is only available in Sly and Lem. We demo with the Lem editor.
Thanks
Thanks for your support, it does make a difference (I am self employed, I don’t earn millions and I’d love to spend *even more time* on CL resources and projects). If you want to learn what I do for the Lisp community and why you should buy my course, read more on Github.
My complementary Lisp videos are on Youtube.
Don’t hesitate to share the link with a friend or a colleague :) Thanks, and happy lisping.
A demo about web development has been recorded and is coming.
ps: we just got a Dockerfile for CIEL, which is then easier to test, thanks to a “student” of my course. Thanks, @themarcelor. It will be on Dockerhub in due time.
The Udemy course by @vindarel is the best introductory material for a fast and practical intro to Common Lisp.
(thanks <3)
A wonderful course for someone with cursory knowledge of lisp. I’ve dipped my feet many times now, but always struggled to wrap my head around everything. This course really helped give me greater confidence in how to start a project. I really enjoyed the focus on having an executable early. The Lisp-2 reveal was beautiful and made me finally understand the difference. Thanks a lot!
Simon, August of 2023. (thanks <3 )
Yukari Hafner — I've opened up a Patreon - Confession 93
@2023-08-25 12:45 · 95 days agoI've been debating opening up a Patreon for many years and I've always been hesitant about accepting donations from people, but I think it's finally time to change my mind on that!
Why make a Patreon now?
I've been working full time on Kandria and associated projects since 2020, and continue to do so today. All of the work that I've done as part of that has been released as open source software, including Kandria itself as well as the engine it runs on, Trial.
Since the release, I've mostly focused on support and the pre-pre-production of my next title, which primarily involves adding new features to Trial that are necessary to create a full-3D game. I can't yet announce much about the game itself, other than that it is a character action game, meaning it features third-person hack and slash focused on slick and satisfying combat.
Unfortunately the release of Kandria has not gone as well as I would have liked, and revenue from it is minimal. Most months I receive only about 200 bucks from Steam, which as you might imagine is not enough to sustain myself full-time, let alone any other artists that are necessary to produce another high-quality game.
So I am finally opening myself up for continued public funding. I know people have wanted to support me in the past before, and I've always been hesitant about accepting that. But now with the financial pressure increasing, I think it's finally time to let people that want to be generous, actually be generous!
What can I expect from this?
Aside from simply funding my existence and allowing me to continue to produce high-quality open source libraries and applications, art, writing, and games, I'm also committing to a couple of extra features:
Every month I'll produce a patron-only update about what's currently happening with the development. This will also include development insight and details that won't be published elsewhere.
I'll also commit to a monthly art stream where I doodle around, and higher-tier patrons can request sketches from me.
Any patron will be able to submit their name or a name of their choosing for inclusion in the credits of any game in production during their backing.
Higher-tier patrons will also receive access to early game prototypes and demos.
You'll be able to directly ask me questions in the comments of the monthly updates and in the stream chat.
If you use Discord, you'll receive access to a special role and patron-exclusive chatroom on my Discord server.
An eternal feeling of debt and gratitude towards you.
What now?
Now I'm going to go back to working on Trial and the unannounced game. In the meantime, please consider backing me. There should already be a monthly update about the state of things out that's only accessible to patrons. In any case, thank you very much for your continued support, and I hope I'll be able to see you among the backer list soon!
Gábor Melis — On Multifaceted Development and the Role of Documentation
@2023-08-17 00:00 · 103 days agoCatchy title, innit? I came up with it while trying to name the development style PAX enables. I wanted something vaguely self-explanatory in a straight out of a marketing department kind of way, with tendrils right into your unconscious. Documentation-driven development sounded just the thing, but it's already taken. Luckily, I came to realize that neither documentation nor any other single thing should drive development. Less luckily for the philosophically disinclined, this epiphany unleashed my inner Richard P. Gabriel. I reckon if there is a point to what follows, it's abstract enough to make it hard to tell.
In programming, there is always a formalization step involved: we must go from idea to code. Very rarely, we have a formal definition of the problem, but apart from purely theoretical exercises, formalization always involves a jump of faith. It's like math word problems: the translation from natural to formal language is out of the scope of formal methods.
We strive to shorten the jump by looking at the solution carefully from different angles (code, docs, specs), and by poking at it and observing its behaviour (tests, logs, input-output, debugging). These facets (descriptive or behavioural) of the solution are redundant with the code and each other. This redundancy is our main tool to shorten the jump. Ultimately, some faith will still be required, but the hope is that if a thing looks good from several angles and behaves well, then it's likely to be a good solution. Programming is empirical.
Tests, on the abstract level, have the same primary job as any other facet: constrain the solution by introducing redundancy. If automatic, they have useful properties: 1. they are cheap to run; 2. inconsistencies between code and tests are found automatically; 3. they exert pressure to keep the code easily testable (when tracking test coverage); 4. sometimes it's easiest to start with writing the tests. On the other hand, tests incur a maintenance cost (often small compared to the gains).
Unlike tests, documentation is mostly in natural language. This has the following considerable disadvantages: documentation is expensive to write and to check (must be read and compared to the implementation, which involves humans for a short while longer), consequently, it easily diverges from the code. It seems like the wrong kind of redundancy. On the positive side, 1. it is valuable for users (e.g. user manual) and also for the programmer to understand the intention; 2. it encourages easily explainable designs; 3. sometimes it's easiest to start with writing the documentation.
Like tests or any other facet, documentation is not always needed, it can drive the development process, or it can lag. But it is a tremendously useful tool to encourage clean design and keep the code comprehensible.
Writing and maintaining good documentation is costly, but the cost can vary greatly. Knuth's Literate Programming took the very opinionated stance of treating documentation of internals as the primary product, which is a great fit for certain types of problems. PAX is much more mellow. It does not require a complete overhaul of the development process or tooling; giving up interactive development would be too high a price. PAX is chiefly about reducing the distance between code and its documentation, so that they can be changed together. By doing so, it reduces the maintenance cost, improves both the design and the documentation, while making the code more comprehensible.
In summary,
Multiple, redundant facets are needed to have confidence in a solution.
Maintaining them has a cost.
This cost shapes the solution.
There is no universally good set of facets.
There need not be a primary facet to drive development.
We mentally switch between facets frequently.
Our tools should make working with multiple facets easier.
And that's the best 4KiB name I could come up with.
Marco Antoniotti — Documenting/debugging HEΛP
@2023-08-16 12:36 · 104 days agoHello
just a quick summer update about some documentation cleanup and some checks on debugging HEΛP.
Have a look at the (small) changes and keep sending feedback.
Cheers
Gábor Melis — Try in Emacs
@2023-08-14 00:00 · 106 days agoTry, my test anti-framework, has just got light Emacs integration. Consider the following test:
(deftest test-foo ()
(is (equal "xxx" 5))
(is (equal 7 7))
(with-failure-expected (t)
(is (same-set-p '(1) '(2)))))
The test can be run from Lisp with (test-foo) (interactive debugging) or (try 'test-foo) (non-interactive), but now there is a third option: run it from Emacs and get a couple of conveniences in return. In particular, with M-x mgl-try then entering test-foo, a new buffer pops up with the test output, which is font-locked based on the type of the outcome. The buffer also has outline minor mode, which matches the hierarchical structure of the output.

M-. and all the usual key bindings work. In addition, a couple of keys bound to navigation commands are available. See the documentation for the details. Note that Quicklisp has an older version of Try that does not have Emacs integration, so you'll need to use https://github.com/melisgl/try until the next Quicklisp release.
Joe Marshall — Off-sides Penalty
@2023-08-05 18:46 · 115 days agoMany years ago I was under the delusion that if Lisp were more “normal looking” it would be adopted more readily. I thought that maybe inferring the block structure from the indentation (the “off-sides rule”) would make Lisp easier to read. It does, sort of. It seems to make smaller functions easier to read, but it seems to make it harder to read large functions — it's too easy to forget how far you are indented if there is a lot of vertical distance.
I was feeling pretty good about this idea until I tried to write a macro. A macro’s implementation function has block structure, but so does the macro’s replacement text. It becomes ambiguous whether the indentation is indicating block boundaries in the macro body or in its expansion.
A decent macro needs a templating system. Lisp has backquote (aka quasiquote). But notice that unquoting comes in both a splicing and non-splicing form. A macro that used the off-sides rule would need templating that also had indenting and non-indenting unquoting forms. Trying to figure out the right combination of unquoting would be a nightmare.
The off-sides rule doesn’t work for macros that have non-standard indentation. Consider if you wanted to write a macro similar to unwind-protect or try…finally. Or if you want to have a macro that expands into just the finally clause.
It became clear to me that there were going to be no simple rules. It would be hard to design, hard to understand, and hard to use. Even if you find parenthesis annoying, they are relatively simple to understand and simple to use, even in complicated situations. This isn’t to say that you couldn’t cobble together a macro system that used the off-sides rule, it would just be much more complicated and klunkier than Lisp’s.
Joe Marshall — The Garden Path
@2023-07-26 18:13 · 125 days agoFollow me along this garden path (based on true events).
We have a nifty program and we want it to be flexible, so it has a config file. We make up some sort of syntax that indicates key/value pairs. Maybe we’re hipsters and use YAML. Life is good.
But we find that we need to configure something dynamically, say based on the value of an environment variable. So we add some escape syntax to the config file to indicate that a value is a variable rather than a literal. But sometimes the string needs a little work done to it, so we add some string manipulation features to the escape syntax.
And when we deploy the program, we find that we want to conditionalize part of the configuration based on the deployment, so we add a conditional syntax to our config language. But conditionals are predicated on boolean values, so we add booleans to our config syntax. Or maybe we make strings do double duty. Of course we need the basic boolean operators, too.
But there’s a lot of duplication across our configurations, so we add the ability to indirectly refer to other config files. That helps to some extent, but there’s a lot of stuff that is almost duplicated, except for a little variation. So we add a way to make a configuration template. Templating needs variables and quoting, so we invent a syntax for those as well.
We’re building a computer language by accident, and without a clear plan it is going to go poorly. Are there data types (aside from strings)? Is there a coherent type system? Are the variables lexically scoped? Is it call-by-name or call-by-value? Is it recursive? Does it have first class (or even second class) procedures? Did we get nested escaping right? How about quoted nested escaping? And good grief our config language is in YAML!
If we had some forethought, we would have realized that we were designing a language and we would have put the effort into making it a good one. If we’re lazy, we’d just pick an existing good language. Like Lisp.
Gábor Melis — DRef and PAX v0.3
@2023-07-26 00:00 · 125 days agoDEFSECTION needs to refer to definitions that do not create a first-class object (e.g. stuff like (*DOCUMENT-LINK-TO-HYPERSPEC* VARIABLE)), and since its original release in 2014, a substantial part of PAX dealt with locatives and references, which reify definitions. This release finally factors that code out into a library called DRef, allowing PAX to focus on documentation. Being very young, DRef lives under adult supervision, in a subdirectory of the PAX repository.
DREF> (definitions 'pax:document-object*)
(#<DREF DOCUMENT-OBJECT* GENERIC-FUNCTION>
#<DREF DOCUMENT-OBJECT* (METHOD NIL (MGL-PAX-BLOG::CATEGORY T))>
#<DREF DOCUMENT-OBJECT* (METHOD NIL (UNKNOWN-DREF T))>
#<DREF DOCUMENT-OBJECT* (METHOD NIL (MGL-PAX::CLHS-DREF T))>
#<DREF DOCUMENT-OBJECT* (METHOD NIL (MGL-PAX::INCLUDE-DREF T))>
#<DREF DOCUMENT-OBJECT* (METHOD NIL (MGL-PAX::GO-DREF T))>
#<DREF DOCUMENT-OBJECT* (METHOD NIL (GLOSSARY-TERM T))>
#<DREF DOCUMENT-OBJECT* (METHOD NIL (SECTION T))>
#<DREF DOCUMENT-OBJECT* (METHOD NIL (ASDF-SYSTEM-DREF T))>
#<DREF DOCUMENT-OBJECT* (METHOD NIL (CLASS-DREF T))>
#<DREF DOCUMENT-OBJECT* (METHOD NIL (STRUCTURE-ACCESSOR-DREF T))>
#<DREF DOCUMENT-OBJECT* (METHOD NIL (WRITER-DREF T))>
#<DREF DOCUMENT-OBJECT* (METHOD NIL (READER-DREF T))>
#<DREF DOCUMENT-OBJECT* (METHOD NIL (ACCESSOR-DREF T))>
#<DREF DOCUMENT-OBJECT* (METHOD NIL (METHOD-DREF T))>
#<DREF DOCUMENT-OBJECT* (METHOD NIL (SETF-DREF T))>
#<DREF DOCUMENT-OBJECT* (METHOD NIL (VARIABLE-DREF T))>
#<DREF DOCUMENT-OBJECT* (METHOD NIL (DREF T))>
#<DREF DOCUMENT-OBJECT* (METHOD NIL (T T))>)
DREF> (dref 'pax:document-object* '(method nil (class-dref t)))
#<DREF DOCUMENT-OBJECT* (METHOD NIL (CLASS-DREF T))>
DREF> (arglist *)
(DREF STREAM)
:ORDINARY
DREF> (docstring **)
"For definitions with a CLASS locative, the arglist printed is the
list of immediate superclasses with STANDARD-OBJECT, CONDITION and
non-exported symbols omitted."
DREF> (pax:document ***)
- [method] DOCUMENT-OBJECT* (DREF CLASS-DREF) STREAM
For definitions with a CLASS locative, the arglist printed is the
list of immediate superclasses with STANDARD-OBJECT, CONDITION and
non-exported symbols omitted.
During the refactoring, the references API was cleaned up. How to write extensions has seen lots of changes (see Extending DRef and Extending PAX), but normal use is the same. DRef is similar to Shinmera's Definitions library but is more tailored to the needs of PAX.
Also in this release:
Apropos got a detailed view feature, which includes the docstrings of all listed definitions not just the reference itself. This is very useful for getting an overview of a package.
The detailed view often has to render docstrings which have not been written with PAX in mind and are not proper markdown. These docstrings are now sanitized aggressively in an unavoidably heuristic manner.
There are now two supported CSS styles for HTML output: :DEFAULT with sans-serif, and :CHARTER with Charter as the main font (which is bundled). The :CHARTER style is used in the linked PAX World documentation on this blog. See PAX:*BROWSE-HTML-STYLE* and PAX:UPDATE-ASDF-SYSTEM-HTML-DOCS.
As usual, quite a few bug fixes and some optimizations also found their way into this release.
Joe Marshall — Playing with AI
@2023-07-08 17:21 · 143 days agoWrite a polemic about how omitting conditional expressions from a language is lazy and stupid.
Conditional expressions are a fundamental feature of any programming language that claims to be expressive, concise and elegant. They allow programmers to write complex logic in a single line, without the need for verbose and repetitive if-else statements. They also enable functional programming paradigms, such as pattern matching and recursion, that are essential for solving many problems efficiently and elegantly.
Omitting conditional expressions from a language is a sign of laziness and stupidity on the part of the language designers. It shows that they do not care about the needs and preferences of their users, who have to write more code, introduce more bugs, and suffer more frustration. It also shows that they do not understand the principles and benefits of functional programming, which have been proven by decades of research and practice. They are stuck in an imperative mindset, where everything has to be done with side effects and mutations.
A language without conditional expressions is a crippled language, that forces programmers to use hacks and workarounds to achieve what should be simple and natural. It is a language that discourages creativity, elegance and readability. It is a language that should be avoided by anyone who values their time, sanity and reputation.
Some examples of programming languages that don't have conditional expressions are:
- CoffeeScript
- Go
- MATLAB
- Pascal
- Rust
This isn't bad, but I wanted more vitriol. I asked and got a screed that could have come out of comp.lang.functional. Unfortunately, it thought better of it and erased its own output before I could snapshot it.
Joe Marshall — Fails Static Type Check, but Runs Anyway
@2023-06-28 19:09 · 153 days agoHere’s a function that fails a static type check, but has no runtime type error:
(defun foo () (sqrt (if (static-type-check? #'foo) "bogus" 2.0)))
I suspect most people that favor static types will argue that this sort of program doesn’t count for some reason or other. I think this is more an example (albeit contrived) of the limitations of static type checking.
Joe Marshall — Tail recursion in REBOL
@2023-06-27 22:06 · 153 days agoMany years ago I worked on a language called REBOL. REBOL was notable in that it used a variation of Polish notation. Function names came first, followed by the arguments in left to right order. Parentheses were generally not needed as the subexpression boundaries could be deduced from the arguments. It’s a bit complicated to explain, but pretty easy to code up.
An interpreter environment will be a list of frames, and each frame is an association list of variable bindings.
(defun lookup (environment symbol)
  (cond ((consp environment)
         (let ((probe (assoc symbol (car environment))))
           (if probe
               (cdr probe)
               (lookup (cdr environment) symbol))))
        ((null environment) (error "Unbound variable."))
        (t (error "Bogus environment."))))

(defun extend-environment (environment formals values)
  (cons (map 'list #'cons formals values) environment))
define mutates the topmost frame of the environment.
(defun environment-define! (environment symbol value)
  (cond ((consp environment)
         (let ((probe (assoc symbol (car environment))))
           (if probe
               (setf (cdr probe) value)
               (setf (car environment)
                     (acons symbol value (car environment))))))
        ((null environment) (error "No environment."))
        (t (error "Bogus environment."))))
We’ll use Lisp procedures to represent REBOL primitives. The initial environment will have a few built-in primitives:
(defun initial-environment ()
  (extend-environment
   nil
   '(add lessp mult print sub sub1 zerop)
   (list #'+ #'< #'* #'print #'- #'1- #'zerop)))
A closure is a three-tuple:
(defclass closure ()
  ((arguments :initarg :arguments :reader closure-arguments)
   (body :initarg :body :reader closure-body)
   (environment :initarg :environment :reader closure-environment)))
An applicable object is either a function or a closure.
(deftype applicable () '(or closure function))
We need to know how many arguments a function takes. We keep a table of the argument counts for the primitives:
(defparameter +primitive-arity-table+ (make-hash-table :test #'eq))

(eval-when (:load-toplevel :execute)
  (setf (gethash #'* +primitive-arity-table+) 2)
  (setf (gethash #'< +primitive-arity-table+) 2)
  (setf (gethash #'+ +primitive-arity-table+) 2)
  (setf (gethash #'- +primitive-arity-table+) 2)
  (setf (gethash #'1- +primitive-arity-table+) 1)
  (setf (gethash #'print +primitive-arity-table+) 1)
  (setf (gethash #'zerop +primitive-arity-table+) 1))

(defun arity (applicable)
  (etypecase applicable
    (closure (length (closure-arguments applicable)))
    (function (or (gethash applicable +primitive-arity-table+)
                  (error "Unrecognized function.")))))
REBOL-EVAL-ONE takes a list of REBOL expressions and returns two values: the value of the leftmost expression in the list, and the list of remaining expressions.
(defun rebol-eval-one (expr-list environment)
  (if (null expr-list)
      (values nil nil)
      (let ((head (car expr-list)))
        (etypecase head
          ((or number string) (values head (cdr expr-list)))
          (symbol
           (case head
             (define
              (let ((name (cadr expr-list)))
                (multiple-value-bind (value tail)
                    (rebol-eval-one (cddr expr-list) environment)
                  (environment-define! environment name value)
                  (values name tail))))
             (if
              (multiple-value-bind (pred tail)
                  (rebol-eval-one (cdr expr-list) environment)
                (values (rebol-eval-sequence
                         (if (null pred) (cadr tail) (car tail))
                         environment)
                        (cddr tail))))
             (lambda
              (values (make-instance 'closure
                                     :arguments (cadr expr-list)
                                     :body (caddr expr-list)
                                     :environment environment)
                      (cdddr expr-list)))
             (otherwise
              (let ((value (lookup environment head)))
                (if (typep value 'applicable)
                    (rebol-eval-application value (cdr expr-list) environment)
                    (values value (cdr expr-list)))))))))))
If the leftmost symbol evaluates to something applicable, we find out how many arguments are needed, gobble them up, and apply the applicable:
(defun rebol-eval-n (n expr-list environment)
  (if (zerop n)
      (values nil expr-list)
      (multiple-value-bind (value expr-list*)
          (rebol-eval-one expr-list environment)
        (multiple-value-bind (values* expr-list**)
            (rebol-eval-n (1- n) expr-list* environment)
          (values (cons value values*) expr-list**)))))

(defun rebol-eval-application (function expr-list environment)
  (multiple-value-bind (arglist expr-list*)
      (rebol-eval-n (arity function) expr-list environment)
    (values (rebol-apply function arglist) expr-list*)))

(defun rebol-apply (applicable arglist)
  (etypecase applicable
    (closure (rebol-eval-sequence
              (closure-body applicable)
              (extend-environment (closure-environment applicable)
                                  (closure-arguments applicable)
                                  arglist)))
    (function (apply applicable arglist))))
Evaluating a sequence is simply calling rebol-eval-one over and over until you run out of expressions:
(defun rebol-eval-sequence (expr-list environment)
  (multiple-value-bind (value expr-list*)
      (rebol-eval-one expr-list environment)
    (if (null expr-list*)
        value
        (rebol-eval-sequence expr-list* environment))))
Let’s try it:
(defun testit ()
  (rebol-eval-sequence
   '(define fib lambda (x) (if lessp x 2 (x) (add fib sub1 x fib sub x 2))
     define fact lambda (x) (if zerop x (1) (mult x fact sub1 x))
     define fact-iter lambda (x answer)
       (if zerop x (answer) (fact-iter sub1 x mult answer x))
     print fib 7
     print fact 6
     print fact-iter 7 1)
   (initial-environment)))

CL-USER> (testit)
13
720
5040
This little interpreter illustrates how basic REBOL evaluation works. But this interpreter doesn’t support iteration. There are no iteration special forms and tail calls are not “safe for space”. Any iteration will run out of stack for a large enough number of repetitions.
We have a few options:
- choose a handful of iteration special forms like do, repeat, loop, for, while, until, etc.
- invent some sort of iterators
- make the interpreter tail recursive (safe-for-space).
To effectively support continuation passing style, you need tail recursion. This alone is a pretty compelling reason to support it.
But it turns out that this is easier said than done. Are you a cruel TA? Give your students this interpreter and ask them to make it tail recursive. The problem is that key recursive calls in the interpreter are not in tail position. These are easy to identify, but you’ll find that fixing them is like flattening a lump in a carpet. You’ll fix tail recursion in one place only to find your solution breaks tail recursion elsewhere.
If our interpreter is written in continuation passing style, it will be syntactically tail recursive, but it won’t be “safe for space” unless the appropriate continuations are η-reduced. If we look at the continuation passing style version of rebol-eval-sequence we’ll see a problem:
(defun rebol-eval-sequence-cps (expr-list environment cont)
  (rebol-eval-one-cps
   expr-list environment
   (lambda (value expr-list*)
     (if (null expr-list*)
         (funcall cont value)
         (rebol-eval-sequence-cps expr-list* environment cont)))))
We cannot η-reduce the continuation. We cannot make this “safe for space”.
But the continuation contains a conditional, and one arm of the conditional simply invokes the containing continuation, so we can η-convert this if we unwrap the conditional. We’ll do this by passing two continuations to rebol-eval-one-cps as follows:
(defun rebol-eval-sequence-cps (expr-list environment cont)
  (rebol-eval-one-cps
   expr-list environment
   ;; first continuation
   (lambda (value expr-list*)
     (rebol-eval-sequence-cps expr-list* environment cont))
   ;; second continuation, eta converted
   cont))
rebol-eval-one-cps will call the first continuation if there are any remaining expressions, and it will call the second continuation if it is evaluating the final expression. This interpreter, with the dual continuations to rebol-eval-one-cps, is safe for space, and it will interpret tail recursive functions without consuming unbounded stack or heap.
But we still have a bit of an implementation problem. We’re allocating an extra continuation per function call. This doesn’t break tail recursion because we discard one of the continuations almost immediately. But our continuations are no longer allocated and deallocated in strict stack order, so we cannot easily convert this back into a stack machine implementation.
To solve this problem, I rewrote the interpreter using Henry Baker’s Cheney on the M.T.A technique where the interpreter functions were a set of C functions that tail called each other and never returned. The stack would grow until it overflowed and then we’d garbage collect it and reset it. The return addresses pushed by the C function calls were ignored. Instead, continuation structs were stack allocated. These contained function pointers to the continuation. Essentially, we would pass two return addresses on the stack, each in its own struct. Once the interpreter figured out which continuation to invoke, it would invoke the function pointer in the struct and pass a pointer to the struct as an argument. Thus the continuation struct would act as a closure.
This technique is pretty portable and not too bad to implement, but writing continuation passing style code in portable C is tedious. Even with macros to help, there is a lot of pointer juggling.
One serendipitous advantage of an implementation like this is that first-class continuations are essentially free. Now I’m not wedded to the idea of first-class continuations, but they make it much easier to implement error handling and advanced flow control, so if you get them for free, in they go.
With its Polish notation, tail recursion, and first-class continuations, REBOL was described as an unholy cross between Tcl and Scheme. “The result of Ousterhout and Sussman meeting in a dark alley.”
Current versions of REBOL use a simplified interpreter that does not support tail recursion or first-class continuations.
ABCL Dev — A Midsummer's Eve with ABCL 1.9.2
@2023-06-21 10:39 · 160 days ago

Quicklisp news — June 2023 Quicklisp dist update now available
@2023-06-19 18:13 · 162 days ago

New projects:
- 3d-spaces — A library implementing spatial query structures — zlib
- 40ants-slynk — Utilities to start SLYNK if needed and to track active connections. — Unlicense
- binary-structures — A library for reading, writing, and representing structures from binary representations — zlib
- cl-atelier — An atelier for Lisp developers — MIT License
- cl-bmp — A library for dealing with Windows bitmaps (BMP, DIB, ICO, CUR) — zlib
- cl-def-properties — Common Lisp definitions introspection library — MIT
- cl-fast-ecs — Blazingly fast Entity-Component-System microframework. — MIT
- cl-fbx — Bindings to ufbx, a simple and free FBX model decoding library — zlib
- cl-id3 — A Common Lisp implementation of the ID3 machine learning algorithm by R. Quinlan. — BSD-2-Clause
- cl-jschema — Common Lisp implementation of JSON Schema — MIT
- cl-jsonl — Lazy read JSONL files with each line a separate definition. — MIT
- cl-ktx — An implementation of the Khronos KTX image file format — zlib
- cl-opensearch-query-builder — Common Lisp implementation of a builder for the OpenSearch query DSL — *CL-OPENSEARCH-QUERY-BUILDER-LICENSE*
- cl-opus — Bindings to libopusfile, a simple and free OGG/Opus decoding library — zlib
- cl-slugify — converts a string into a slug representation. — unlicense
- cl-tqdm — Simple And Fast Progress Bar Library for Common Lisp — MIT
- cl-unac — bindings for lib unac(3). — unlicense
- cl-wavefront — A library to parse the Wavefront OBJ file format. — zlib
- cl-webmachine — HTTP Semantic Awareness on top of Hunchentoot — MIT License
- decompress — A defensive and fast Deflate decompressor in pure CL. — MIT
- extensible-optimizing-coerce — `extensible-optimizing-coerce` primarily provides a `extensible-optimizing-coerce:coerce` function intended as an extensible alternative to `cl:coerce`. — MIT
- kdlcl — KDL reader/printer for common lisp — MIT No Attribution
- kdtree-jk — KD-TREE package for searching for nearest neighbors in N points in M dimensions in N log(N) time. — MIT
- khazern — A portable and extensible Common Lisp LOOP implementation — BSD
- letv — The LETV Package. Exports two macros, LETV and LETV*, that allow combining standard LET and LET* constructs with MULTIPLE-VALUE-BIND in a possibly less verbose way that also requires less indentation. — BSD
- logging — Functions to configure log4cl for different contexts: REPL, Backend, Command Line Application. — Unlicense
- lru-cache — A least-recently-used cache structure — zlib
- memory-regions — Implementation of a memory region abstraction — zlib
- native-lazy-seq — Lazy sequence using user-extensible sequence protocol. — GPLv3.0+
- nytpu.lisp-utils — A collection of miscellaneous standalone utility packages. — MPL-2.0
- openapi-generator — Parse OpenAPI into CLOS object for client generation — AGPLv3-later
- prettier-builtins — A lightweight library to pretty print builtin arrays and hash-tables. — MIT
- prometheus-gc — This is a Prometheus collector for Common Lisp implementation garbage collector. — Unlicense
- punycode — Punycode encoding/decoding — zlib
- quickhull — An implementation of the Quickhull convex hull construction algorithm — zlib
- reblocks — A Common Lisp web framework, successor of the Weblocks. — LLGPL
- reblocks-auth — A system to add an authentication to the Reblocks based web-site. — Unlicense
- reblocks-file-server — A Reblocks extension allowing to create routes for serving static files from disk. — Unlicense
- reblocks-lass — A helper for Reblocks framework to define CSS dependencies in LASS syntax. — Unlicense
- reblocks-navigation-widget — A container widget which switches between children widgets when user changes an url. — Unlicense
- reblocks-parenscript — An utility to define JavaScript dependencies for Weblocks widgets using Parenscript. — Unlicense
- reblocks-prometheus — This is an addon for Reblocks Common Lisp framework which allows to gather metrics in Prometheus format. — Unlicense
- reblocks-typeahead — A Reblocks widget implementing typeahead search. — Unlicense
- reblocks-ui — A set of UI widgets for Reblocks web framework! — BSD
- reblocks-websocket — Reblocks extension allowing to add a bidirectional communication via Websocket between a backend and Reblocks widgets. — Unlicense
- rs-json — Yet another JSON decoder/encoder. — Modified BSD License
- si-kanren — A micro-Kanren implementation in Common Lisp — MIT
- sly-macrostep — Expand CL macros inside source files — GPL 3
- sly-named-readtables — NAMED-READTABLES support for SLY — GPL 3
- statusor — A library for graceful handling of errors in common lisp inspired by absl::StatusOr — BSD
- stopclock — stopclock is a library for measuring time using (stop)clocks — Apache 2.0
- unboxables — A simple wrapper around CFFI to enable contiguously allocated arrays of structures in Common Lisp. — MIT
- vellum-binary — vellum custom binary format. — BSD simplified
Updated projects: 3bmd, 3bz, 3d-matrices, 3d-quaternions, 3d-transforms, 3d-vectors, abstract-arrays, acclimation, adhoc, alexandria, april, arc-compat, architecture.builder-protocol, array-utils, asdf-dependency-graph, aserve, async-process, atomics, bdef, big-string, bordeaux-threads, bp, cari3s, cffi, chanl, chipz, chirp, chlorophyll, ci, cl+ssl, cl-all, cl-apertium-stream-parser, cl-async, cl-bmas, cl-charms, cl-clon, cl-collider, cl-colors2, cl-confidence, cl-cpus, cl-cram, cl-data-structures, cl-dbi, cl-feedparser, cl-form-types, cl-forms, cl-gamepad, cl-gap-buffer, cl-git, cl-glib, cl-gltf, cl-gobject-introspection, cl-gobject-introspection-wrapper, cl-gserver, cl-i18n, cl-lib-helper, cl-liballegro, cl-liballegro-nuklear, cl-libuv, cl-locatives, cl-lzlib, cl-markless, cl-mixed, cl-mlep, cl-modio, cl-mpg123, cl-naive-store, cl-openapi-parser, cl-out123, cl-patterns, cl-ppcre, cl-protobufs, cl-rashell, cl-replica, cl-semver, cl-sentry-client, cl-steamworks, cl-stopwatch, cl-str, cl-string-complete, cl-telegram-bot, cl-threadpool, cl-tiled, cl-unix-sockets, cl-utils, cl-veq, cl-webkit, cl-zstd, clack, clad, classimp, clingon, clog, closer-mop, cluffer, clx, cmd, codex, com-on, common-lisp-jupyter, commondoc-markdown, computable-reals, concrete-syntax-tree, consfigurator, cover, croatoan, crypto-shortcuts, css-lite, csv-validator, ctype, data-lens, deeds, definitions-systems, deflate, defrec, dense-arrays, deploy, depot, dexador, djula, dns-client, doc, docs-builder, draw-cons-tree, dynamic-collect, eclector, esrap, extensible-compound-types, factory-alien, file-select, filesystem-utils, fiveam-matchers, float-features, for, fresnel, functional-trees, gendl, geodesic, glsl-toolkit, gtirb-capstone, gtirb-functions, harmony, http2, iclendar, imago, in-nomine, interface, journal, json-lib, jzon, lack, letrec, lichat-tcp-client, lichat-tcp-server, lichat-ws-server, lime, linear-programming, linewise-template, lispcord, literate-lisp, lla, log4cl, log4cl-extras, 
lquery, maiden, map-set, markup, math, mcclim, messagebox, metabang-bind, mgl, mgl-mat, mgl-pax, micmac, mmap, mnas-graph, mnas-hash-table, mnas-package, mnas-path, mnas-string, modularize, mystic, named-closure, named-readtables, nibbles-streams, nodgui, north, omglib, osc, ospm, overlord, parachute, parameterized-function, pathname-utils, petalisp, plump, polymorphic-functions, ppath, print-licenses, promise, protobuf, py4cl2, py4cl2-cffi, queen.lisp, quick-patch, quri, random-sample, random-state, recur, sc-extensions, scheduler, sel, serapeum, shasht, shop3, simple-config, simple-inferiors, simple-tasks, sly, speechless, spinneret, staple, stepster, stmx, studio-client, stumpwm, swank-client, synonyms, template, ten, tfeb-lisp-hax, tooter, trace-db, trivia, trivial-arguments, trivial-clipboard, trivial-extensible-sequences, trivial-features, trivial-indent, trivial-package-locks, trivial-timeout, trivial-with-current-source-form, trucler, try, typo, uax-9, ubiquitous, ucons, usocket, utm-ups, vellum, vellum-postmodern, verbose, webapi, zacl, zippy, zpb-ttf.
Removed projects: cl-data-frame, cl-facts, cl-lessp, cl-libfarmhash, cl-libhoedown, cl-num-utils, cl-random, cl-rollback, colleen, gfxmath, glsl-metadata, halftone, history-tree, lionchat, monomyth, nclasses, neo4cl, nfiles, nhooks, njson, nkeymaps, nsymbols, numericals, nyxt, osmpbf, plain-odbc, trivial-coerce, trivial-string-template.
I removed Nyxt because it uses its own style of build system (nasdf) that doesn't work very well with Quicklisp. I recommend getting it directly if you want to use it. Other removed projects stopped building and did not respond to bug reports or disappeared from the Internet.
To get this update, use (ql:update-dist "quicklisp"). Enjoy!
Gábor Melis — PAX Live Documentation Browser
@2023-06-10 00:00 · 171 days ago

PAX got a live documentation browser to make documentation generation a more interactive experience. A great thing about Lisp development is changing a single function and quickly seeing how it behaves without the delay of a full recompile. Previously, editing a docstring required regenerating the full documentation to see how the changes turned out. The live documentation browser does away with this step, which tightens the edit/document loop.
PAX also got an apropos browser. It could always generate documentation for stuff not written with PAX in mind, so with the live browser already implemented, this was just a small add-on.
The trouble with interactivity is, of course, that it's difficult to get the point across in text, so I made two short videos that demonstrate the basics.
Thomas Fitzsimmons — ulisp-repl
@2023-06-09 16:06 · 172 days ago

Read-Evaluate-Print Loops are great for doing quick experiments. I recently released two new REPL packages for Emacs to GNU ELPA. This is the second in a two-part series. Here is part 1.
For microcontroller projects, uLisp is a great option. It provides a Lisp REPL on top of the Arduino libraries. It implements a subset of Common Lisp and adds microprocessor-specific functions.
I previously built and blogged about a handheld computer designed by uLisp’s creator. I also ported uLisp to the SMART Response XE.
uLisp is controlled by a serial port. People on the uLisp forum have posted various ways to do this, including some Emacs methods. They required external software though, and I wanted something that would run in Emacs with no external dependencies. Emacs has make-serial-process and serial-term built-in, so I wondered if I could make a REPL using those. The result is ulisp-repl, which I published to GNU ELPA. Here is an asciinema screencast of installing and using it. You can pause the video and copy text out of it to try in your Emacs session.
You can also download ulisp-repl-1.cast and play it with the asciinema command line player. It has syntax highlighting on the current line. It might be cool to also implement a SLIME server in Emacs itself (and have SLIME connect to the current Emacs process instead of an external one), but uLisp programs are usually small, so it’s easy enough to copy-n-paste Lisp snippets into the REPL.
Joe Marshall — Lisp Essential, But Not Required
@2023-06-08 19:42 · 173 days ago

Here’s a weird little success story involving Lisp. The code doesn’t rely on anything specific to Lisp. It could be rewritten in any language. Yet it wouldn’t have been written in the first place if it weren’t for Lisp.
I like to keep a Lisp REPL open in my Emacs for tinkering around with programming ideas. It only takes a moment to hook up a REST API or scrape some subprocess output, so I have a library of primitives that can talk to our internal build tools and other auxiliary tools such as GitHub or CircleCI. This comes in handy for random ad hoc scripting.
I found out that CircleCI is written in Clojure, and if you connect to your local CircleCI server, you can start a REPL and run queries on the internal CircleCI database. Naturally, I hooked up my local REPL to the Clojure REPL so I could send expressions over to be evaluated. We had multiple CircleCI servers running, so I could use my local Lisp to coordinate activity between the several CircleCI REPLs.
Then a need arose to transfer projects from one CircleCI server to another. My library had all the core capabilities, so I soon had a script for transferring projects. But after transferring a project, we had to fix up the branch protection in GitHub. The GitHub primitives came in handy. Of course our internal systems had to be informed that the project moved, but I had scripting primitives for that system as well.
More requirements arose: package the tool into a docker image, deploy it as a microservice, launch it as a kubernetes batch job, etc. At each point, the existing body of code was 90% of the solution, so it only required small changes to the code to handle the new requirements. As of now, the CircleCI migration tool is deployed as a service used by dozens of our engineers.
Now Lisp isn’t directly necessary for this project. It could easily (for some definitions of easy) be rewritten in another language. But the initial idea of connecting to a Clojure REPL from another Lisp is an obvious thing to try out and only takes moments to code up. If I were coding in another language, I could connect to the REPL, but then I’d have to translate between my other language and Lisp. It’s not an obvious thing to try out and would take a long time to code up. So while this project could be written in another language, it never would have been. And Lisp’s flexibility meant that there was never a reason for a rewrite, even as the requirements were changing.
Nicolas Martyanoff — Reduce vs fold in Common Lisp
@2023-06-02 18:00 · 179 days ago

Introduction
If you have already used functional languages, you are probably familiar with fold, a higher-order function used to iterate on a collection of values to combine them and return a result. You may be surprised that Common Lisp does not have a fold function, but provides REDUCE, which works a bit differently. Let us see how they differ.
Understanding REDUCE
In its simplest form, REDUCE accepts a function and a sequence (meaning either a list or a vector). It then applies the function to successive pairs of sequence elements.
You can easily check what happens by tracing the function:
CL-USER> (trace +)
CL-USER> (reduce #'+ '(1 2 3 4 5))
0: (+ 1 2)
0: + returned 3
0: (+ 3 3)
0: + returned 6
0: (+ 6 4)
0: + returned 10
0: (+ 10 5)
0: + returned 15
15
In this example, the call to REDUCE evaluates (+ (+ (+ (+ 1 2) 3) 4) 5).
You can reverse the order using the :from-end keyword argument:
CL-USER> (trace +)
CL-USER> (reduce #'+ '(1 2 3 4 5) :from-end t)
0: (+ 4 5)
0: + returned 9
0: (+ 3 9)
0: + returned 12
0: (+ 2 12)
0: + returned 14
0: (+ 1 14)
0: + returned 15
15
In which case you will evaluate (+ 1 (+ 2 (+ 3 (+ 4 5)))). The result is of course the same since the + function is associative.
You can of course provide an initial value, in which case REDUCE will behave as if this value had been present at the beginning (or the end with :from-end) of the sequence.
The surprising aspect of REDUCE is its behaviour when called on a sequence with fewer than two elements:
- If the sequence contains a single element:
- if there is no initial value, the function is not called and the element is returned directly;
- if there is one, the function is called on both the initial value and the single element.
- If the sequence is empty:
- if there is no initial value, the function is called without any argument;
- if there is one, the function is not called and the initial value is returned directly.
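Since (+) returns 0, the identity element of addition, all four cases can be checked directly at the REPL with +:

```lisp
;; The four edge cases of REDUCE, illustrated with +.
(reduce #'+ '(7))                  ; => 7, single element: + is not called
(reduce #'+ '(7) :initial-value 1) ; => 8, + called on init and element
(reduce #'+ '())                   ; => 0, empty: + called with no arguments
(reduce #'+ '() :initial-value 5)  ; => 5, + is not called
```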
As a result, any function passed to REDUCE must be able to handle being called with zero, one or two arguments. Most examples found on the Internet use + or append, and these functions happen to handle it (e.g. (+) returns the identity element of addition, zero). If you write your own functions, you will have to deal with it using the &OPTIONAL lambda list keyword.
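Such a function might be sketched as follows; MY-MAX and its choice of identity element are illustrative examples, not from the article. The supplied-p parameters of &OPTIONAL distinguish the zero-, one- and two-argument cases:

```lisp
;; A reducer that tolerates zero, one, or two arguments, as REDUCE
;; requires.  MY-MAX is a hypothetical example; MOST-NEGATIVE-FIXNUM
;; acts as the identity element returned for (reduce #'my-max '()).
(defun my-max (&optional (a nil a-supplied-p) (b nil b-supplied-p))
  (cond ((not a-supplied-p) most-negative-fixnum) ; zero arguments
        ((not b-supplied-p) a)                    ; one argument
        (t (if (> a b) a b))))                    ; two arguments
```

With this, (reduce #'my-max '(3 9 5)) returns 9, and (reduce #'my-max '()) falls back to the identity element instead of signalling an error.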
This can lead to unexpected behaviours. If you compute the sum of a sequence of floats using (reduce #'+ floats), you may find it logical to obtain a float. But if FLOATS is an empty sequence, you will get 0, which is not a float. Something to keep in mind.
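One common workaround (an assumption on my part, not something the article prescribes) is to pass a float :INITIAL-VALUE, which pins down the result type even for the empty sequence:

```lisp
;; Without an initial value, the empty case falls back to (+) => 0.
(reduce #'+ (vector 1.5 2.5))            ; => 4.0, a float as expected
(reduce #'+ (vector))                    ; => 0, an integer
;; A float initial value guarantees a float result in every case.
(reduce #'+ (vector) :initial-value 0.0) ; => 0.0
```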
Differences with fold
The fold function is traditionally defined as accepting three arguments: a function, an initial value — or accumulator — and a list. The function is called repeatedly with both the accumulator and a list element, using the value returned by the function as next accumulator.
For example in Erlang:
lists:foldl(fun(X, Sum) -> Sum + X end, 0, [1, 2, 3, 4, 5]).
An interesting consequence is that fold functions are always called with the same type of arguments (the list value and the accumulator), while REDUCE functions can be called with zero or two list values. This makes it harder to write functions when the accumulated value has a different type from the sequence values.
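Note that REDUCE with :INITIAL-VALUE already behaves like a left fold: the function is then always called with the accumulator and one element, so a differently-typed accumulator works fine. Reversing a list into an accumulator, for instance:

```lisp
;; With :INITIAL-VALUE, the function is always called as
;; (function accumulator element), exactly the foldl calling convention.
(reduce (lambda (acc x) (cons x acc)) '(1 2 3) :initial-value '())
;; => (3 2 1)
```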
Fold is also simpler than REDUCE since it does not have any special case, making it easier to reason about its behaviour.
It would be interesting to know why a function as fundamental as fold was not included in the Common Lisp standard.
Implementing FOLDL
We can of course implement a fold function in Common Lisp. We will concentrate on the most common (and most efficient) left-to-right version. Let us start with a simple implementation for lists:
(defun foldl/list (function value list)
(declare (type (or function symbol) function)
(type list list))
(if list
(foldl/list function (funcall function value (car list)) (cdr list))
value))
As clearly visible, the recursive call to FOLDL/LIST is in tail position, and SBCL will happily perform tail-call elimination.
For vectors we use an iterative approach:
(defun foldl/vector (function value vector)
(declare (type (or function symbol) function)
(type vector vector))
(do ((i 0 (1+ i))
(accumulator value))
((>= i (length vector))
accumulator)
(setf accumulator (funcall function accumulator (aref vector i)))))
Finally we write the main FOLDL function, which operates on any sequence:
(defun foldl (function value sequence)
(declare (type (or function symbol) function)
(type sequence sequence))
(etypecase sequence
(list (foldl/list function value sequence))
(vector (foldl/vector function value sequence))))
At this point we can already use FOLDL for various operations. We could of course improve it with the addition of the usual :START, :END and :KEY keyword arguments for more flexibility.
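Assuming the three definitions above are loaded, a quick usage sketch shows FOLDL working uniformly on lists and vectors, including with an accumulator of a different type from the elements:

```lisp
;; Usage sketch for the FOLDL defined above (definitions assumed loaded).
(foldl #'+ 0 '(1 2 3 4 5))  ; => 15
(foldl #'+ 0 #(1 2 3 4 5))  ; => 15, same calling convention for vectors
;; A list accumulator over symbol elements: no edge cases to worry about.
(foldl (lambda (acc x) (cons x acc)) '() '(a b c)) ; => (C B A)
```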
vindarel — Pretty GUIs now: nodgui comes with a pre-installed nice looking theme
@2023-06-01 17:03 · 180 days ago

Being able to load a custom theme is great, but it would be even better if we didn’t have to manually install one.
Well, recent changes in nodgui from yesterday and today just dramatically improved the GUI situation for Common Lisp[0].
nodgui now ships the yaru theme
@cage committed the Yaru theme from ttkthemes in nodgui’s repository, and we added QoL improvements. To use it, now you can simply do:
(with-nodgui ()
(use-theme "yaru")
...)
or
(with-nodgui (:theme "yaru")
...)
or
(setf nodgui:*default-theme* "yaru")
(with-nodgui ()
...)
Yaru looks like this:
No, it isn’t native, but it doesn’t look like the 50s either.
See my previous post for more themes, screenshots and instructions to load a third-party theme. Forest Light is nice too!
Try the demos
Try the demos with this theme:
(setf nodgui:*default-theme* "yaru")
(nodgui.demo:demo)
;; or
(nodgui.demo:demo :theme "yaru")
;; a precise demo
(nodgui.demo::demo-widget :theme "yaru")
Themes directory
@cage also made it easier to load a theme.
I have added the special variable *themes-directory* (default is the directory "themes" under the directory where the ASDF system is) where the library looks for themes. Each theme must be placed in its own directory as a subdirectory of the aforementioned variable; the name of the directory must be the name of the theme. Moreover, the TCL file that specifies the theme must have the same name as the theme, with the extension “tcl” appended.
For example, the theme “foo” has to be: “foo/foo.tcl”
Provided these conditions are met, using a new theme should be as simple as typing (nodgui:use-theme "foo"), without (nodgui:eval-tcl-file).
Otherwise, just clone a theme repository somewhere, and call
(eval-tcl-file "path/to/the/theme.tcl")
(use-theme "theme")
I can very well imagine using small GUI tools built in Tk and this theme. I’ll have to try nodgui’s auto-complete widget too. If you do build a little something, please share; it will help and inspire me and the ones after you.
- https://peterlane.netlify.app/ltk-examples/
- @cage announces new releases on Mastodon.
@cage@stereophonic.space
.
[0]: be more grandiose if you can.
For older items, see the Planet Lisp Archives.
Last updated: 2023-11-27 14:04