Just sqr for now, but it shows that it works.
I would like to return Results from add_def, but that makes using foldl
slightly tricky, and my brainpower is low at the mo'.
I don't like passing the stack through isnt_int, but that lets you
chain with andThen.
There's probably a clever or idiomatic way to couple the stack to the
result without passing it through the type-checker function, but I
don't know what it is right now, and this works.
It's a simple task, but I'm not up on my CLI tools, so I went with
Python instead of sh. The split command doesn't have a '-p' switch on
Ubuntu. (I'm using Ubuntu on this laptop because it can correctly
configure the WiFi and the laptop has no ethernet port.)
I have it broken up into three stages: a parser that reads a string from
stdin and emits (Prolog) AST to stdout; an interpreter of sorts that
reads AST from stdin, evaluates it, and then emits the AST of the stack
on stdout; and a printer that reads AST on stdin and prints Joy-ish code
to stdout.
I say Joy-ish because currently math is not evaluated and results of
math appear as expressions, not values.
This is because GNU Prolog doesn't have unbounded integers (its
numbers are machine integers), so literals that are larger than the
machine word are converted into atoms! To keep things simple, I made
all ints into atoms, but then you can't evaluate them: '1'+'2' is not
'3' (it might be '12', though).
So I print them out as expressions:

```
$ echo "1 2 3 4 [+ sub /] i" | ./joy_to_ast | ./thun | ./printer
(1 div (2-(4+3)))
```
You could almost feed that to, say, Python to evaluate, eh? Or dc with
proper formatting? (man dc; "Desk Calculator".)
Anyway, it's a start. The Prolog interpreter is more for things like
type checking and inference, optimizing, compiling, etc. Symbolic stuff
that's a PITA to express in other languages. (The old type inference
code in Python was pages long; in Prolog it's just the thun/3 & thun/4
predicates themselves. At least so far. There are things we will want
to do eventually that might be a PITA to express in Prolog, eh?)
On a lark I implemented it in recursive style, but I'm not going to keep
it that way. I have to implement next_term() first and then I'll
uncomment i_joy_combinator().
I'm pretty happy with this. It's iterative rather than recursive so you
won't blow out the call stack if you want to parse a million brackets
(intermediate results are stored on (another) little stack.) It scans
the string and builds lists and sublists as it goes, without wasting
cons cells.
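
Here's the technique in miniature (a sketch, not the actual code:
single-character tokens stand in for real lexing, the names are made
up, and the fixed-size work stack would really need to grow):

```c
#include <stdio.h>
#include <stdlib.h>

typedef struct Node {
    int is_list;          /* 0: atom, 1: sublist */
    char atom;            /* valid when is_list == 0 */
    struct Node *head;    /* valid when is_list == 1 */
    struct Node *next;
} Node;

typedef struct { Node *head, *tail; } List;

#define MAX_DEPTH 1024    /* the real thing would grow this */

static Node *new_node(void) {
    Node *n = calloc(1, sizeof(Node));
    if (!n) { perror("calloc"); exit(1); }
    return n;
}

/* Append in place via a tail pointer, so no reversing afterward
   and each cons cell is allocated exactly once. */
static void append(List *l, Node *n) {
    if (l->tail) l->tail->next = n; else l->head = n;
    l->tail = n;
}

static Node *parse(const char *s) {
    List stack[MAX_DEPTH];    /* unfinished lists, instead of recursion */
    int sp = 0;
    stack[0].head = stack[0].tail = NULL;
    for (; *s; ++s) {
        if (*s == '[') {      /* start a sublist: push a fresh list */
            if (++sp >= MAX_DEPTH) { fprintf(stderr, "too deep\n"); exit(1); }
            stack[sp].head = stack[sp].tail = NULL;
        } else if (*s == ']') {   /* finish it: pop, graft into parent */
            if (sp == 0) { fprintf(stderr, "unbalanced ]\n"); exit(1); }
            Node *n = new_node();
            n->is_list = 1;
            n->head = stack[sp--].head;
            append(&stack[sp], n);
        } else if (*s != ' ') {   /* atom: append to the current list */
            Node *n = new_node();
            n->atom = *s;
            append(&stack[sp], n);
        }
    }
    if (sp != 0) { fprintf(stderr, "unbalanced [\n"); exit(1); }
    return stack[0].head;
}

static void print_list(const Node *n) {
    for (; n; n = n->next)
        if (n->is_list) { printf("["); print_list(n->head); printf("] "); }
        else printf("%c ", n->atom);
}

int main(void) {
    print_list(parse("a [b [c] d] e"));
    printf("\n");
    return 0;
}
```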
So I was memset'ing the hash table and string table /after/ setting up
the left- and right-bracket tokens! Then when I tried to print the
token list, ht_lookup() couldn't find the strings in the hash table,
dutifully set the error code, and the system properly quit printing
and halted. D'oh! That was a subtle one. Obvious in hindsight.
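
The shape of the bug, in miniature (names and table layout here are
stand-ins, not the real code):

```c
#include <stdio.h>
#include <string.h>

static const char *hash_table[64];

/* stand-in for the real interning: stash s and return its slot */
static int intern(const char *s) {
    for (int i = 0; i < 64; ++i)
        if (!hash_table[i]) { hash_table[i] = s; return i; }
    return -1;
}

int main(void) {
    int lbracket = intern("[");                /* oops: interned first... */
    memset(hash_table, 0, sizeof hash_table);  /* ...then wiped by init */
    printf("slot %d holds: %s\n", lbracket,
           hash_table[lbracket] ? hash_table[lbracket] : "(nothing)");
    return 0;
}
```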
It took all expletive-deleted day but I finally nailed it down. In the
end the last bug was that I was decrementing a stack pointer /after/
trying to load the item at the (empty) top of the stack. Classic.
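
In sketch form, assuming (as here) a stack pointer that points one
past the top:

```c
#include <stdio.h>

/* sp points one past the top, so pop must step down BEFORE loading;
   the buggy version read stack[*sp] (the empty slot) and THEN
   decremented. */
static int pop(const int *stack, int *sp) {
    return stack[--(*sp)];
}

int main(void) {
    int stack[8], sp = 0;
    stack[sp++] = 42;                 /* push */
    printf("%d\n", pop(stack, &sp));  /* 42, not garbage */
    return 0;
}
```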
I still need to make it not re-allocate strings that it has already
interned, but beyond that I think it's fine.
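
The fix I have in mind is just to check the table before copying, so
repeated strings share one allocation. A sketch, with a linear scan
standing in for the real hash table:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAX_STRINGS 256
static char *table[MAX_STRINGS];
static int count;

static const char *intern(const char *s) {
    for (int i = 0; i < count; ++i)
        if (strcmp(table[i], s) == 0)
            return table[i];            /* already interned: no new copy */
    if (count == MAX_STRINGS) { fprintf(stderr, "table full\n"); exit(1); }
    return table[count++] = strdup(s);  /* first sighting: copy once */
}

int main(void) {
    const char *a = intern("dup"), *b = intern("dup");
    printf("one allocation shared: %s\n", a == b ? "yes" : "no");
    return 0;
}
```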
Clunky, but now you only have to change the font name four times in
one place rather than N times in N places, eh?
Writing C again for the first time in ages (this and the Joy
interpreter), using the preprocessor feels like stone-age
meta-programming; through the lens of Lisp it's like, "you do what to
your source code?"
It's easy enough to substitute a different font in the call to
Imagemagick's `convert` tool, but in the case of pixel fonts, it will
scale them, so you're not getting a proper bitmap of the pixels, you're
getting a kind of screenshot of the pixels.
I want to make different machinery for bitmapped pixel fonts, and I
want a simple DEFINE-based way to pick them without having to edit the
drawing code, e.g. #define font_data font_PublicPixel_22_data, yeah?
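
Something like this sketch, where font_SomeOther_16_data and the dummy
bytes are made up for the example:

```c
#include <stdio.h>
#include <stdint.h>

/* Each font's bitmap lives under its own name; one #define picks
   which name the rest of the code sees. */
static const uint8_t font_PublicPixel_22_data[] = { 0x7e, 0x81, 0x7e };
static const uint8_t font_SomeOther_16_data[]   = { 0x18, 0x3c, 0x18 };

#define font_data font_PublicPixel_22_data

int main(void) {
    /* drawing code mentions only font_data; switching fonts means
       editing this one #define, not the N call sites */
    printf("first byte: 0x%02x\n", font_data[0]);
    (void)font_SomeOther_16_data;
    return 0;
}
```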
After that, simple affine transforms for fake 3D...
Pick a letter at random.
I wasn't checking that the destination values were not less than zero,
which let you click at the top/left edges of the screen, and then
carefree_alpha_blend_blit() would try to write to areas outside the
framebuffer. I started to see "Zalgo" pixels in the letters, so I'm
guessing the pixel data is getting stored just above the framebuffer,
which makes sense (because that's the order they appear in the source
code and this is a simple system!) When you clicked at the top of the
screen it was writing pixels into the font data, eh? Then when you
clicked elsewhere on the screen you got extra pixels with your
letterforms and it looked like Unicode Lovecraft puns.
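
The fix amounts to clipping the blit rectangle to the framebuffer
before writing anything. A sketch, with a plain copy standing in for
the alpha blend:

```c
#include <stdint.h>
#include <stdio.h>

/* Skip the source rows/columns that would land above or left of the
   screen, and stop at the right/bottom edges, so nothing is ever
   written outside the framebuffer. */
void blit_clipped(uint32_t *fb, int fb_w, int fb_h,
                  const uint32_t *src, int src_w, int src_h,
                  int dst_x, int dst_y) {
    int x0 = dst_x < 0 ? -dst_x : 0;   /* source columns to skip */
    int y0 = dst_y < 0 ? -dst_y : 0;   /* source rows to skip */
    for (int y = y0; y < src_h && dst_y + y < fb_h; ++y)
        for (int x = x0; x < src_w && dst_x + x < fb_w; ++x)
            fb[(dst_y + y) * fb_w + (dst_x + x)] = src[y * src_w + x];
}

int main(void) {
    static uint32_t fb[4 * 4];
    static const uint32_t glyph[2 * 2] = { 1, 2, 3, 4 };
    blit_clipped(fb, 4, 4, glyph, 2, 2, -1, -1);  /* top-left "click" */
    printf("fb[0] = %u\n", fb[0]);  /* 4: only the in-bounds pixel landed */
    return 0;
}
```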
ooo that's a finicky parser tho
Early days. It's all good.
(But damn those Rust tracebacks, what a useless pile of text. I know
the problem is in the parser!? Why is it showing me all the Rust
internal crap and none of the actual stack trace of the ncc code? I
mean, look at this thing:
```
sforman@bock:~/src/Joypy/implementations/uvm-ncc % setenv RUST_BACKTRACE full
sforman@bock:~/src/Joypy/implementations/uvm-ncc % gmake
cd /home/sforman/src/uvm/ncc ; cargo run /home/sforman/src/Joypy/implementations/uvm-ncc/xerblin.c
Finished dev [unoptimized + debuginfo] target(s) in 0.01s
Running `target/debug/ncc /home/sforman/src/Joypy/implementations/uvm-ncc/xerblin.c`
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: ParseError { msg: "expected identifier", line_no: 139, col_no: 20 }', src/main.rs:98:43
stack backtrace:
0: 0x10d741f - <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt::h919bef3d5abebde9
1: 0x10ee08e - core::fmt::write::h6413343c5226105f
2: 0x10bce85 - std::io::Write::write_fmt::had0ddcb25461208f
3: 0x10d71d5 - std::sys_common::backtrace::print::h5ed9962f90e9b258
4: 0x10c6eff - std::panicking::default_hook::{{closure}}::h3978a8a8f5c1f893
5: 0x10c6b91 - std::panicking::default_hook::h0cdbdd5201407347
6: 0x10c75bb - std::panicking::rust_panic_with_hook::h15bc8b6da20c2af3
7: 0x10d7777 - std::panicking::begin_panic_handler::{{closure}}::h082a693f9436206b
8: 0x10d756c - std::sys_common::backtrace::__rust_end_short_backtrace::h56343aa2331ff455
9: 0x10c7142 - rust_begin_unwind
10: 0x10ed3d3 - core::panicking::panic_fmt::hf18d1d226927e137
11: 0x10efa93 - core::result::unwrap_failed::ha5725a0b4539229c
12: 0x105b505 - core::result::Result<T,E>::unwrap::h0f336a18a308049e
at /wrkdirs/usr/ports/lang/rust/work/rustc-1.66.0-src/library/core/src/result.rs:1113:23
13: 0x1068314 - ncc::main::h189929cbc5450262
at /usr/home/sforman/src/uvm/ncc/src/main.rs:98:20
14: 0x10611cb - core::ops::function::FnOnce::call_once::h8146a3c8fa28ca14
at /wrkdirs/usr/ports/lang/rust/work/rustc-1.66.0-src/library/core/src/ops/function.rs:251:5
15: 0x106e13e - std::sys_common::backtrace::__rust_begin_short_backtrace::h4e9f285841c55b79
at /wrkdirs/usr/ports/lang/rust/work/rustc-1.66.0-src/library/std/src/sys_common/backtrace.rs:121:18
16: 0x1069fa1 - std::rt::lang_start::{{closure}}::h8ca60a785648e691
at /wrkdirs/usr/ports/lang/rust/work/rustc-1.66.0-src/library/std/src/rt.rs:166:18
17: 0x10c1514 - std::rt::lang_start_internal::hadf3843363799440
18: 0x1069f7a - std::rt::lang_start::h3ee6ffb894d9f1d3
at /wrkdirs/usr/ports/lang/rust/work/rustc-1.66.0-src/library/std/src/rt.rs:165:17
19: 0x10684ee - main
20: 0x104f472 - _start
at /usr/src/lib/csu/amd64/crt1.c:76:7
gmake: *** [GNUmakefile:11: /home/sforman/src/uvm/ncc/out.asm] Error 101
```
There's ONE LINE from the ncc source: uvm/ncc/src/main.rs:98:20, which
is a call to parse_unit, but it's the following unwrap that actually
panics. In hindsight that makes sense: the parser hands the ParseError
back as an ordinary Err value, so nothing panics inside the parser at
all; the backtrace is captured at the unwrap() in main, and everything
else in it is the panic machinery itself.
This is a compromise between updating the screen every frame (which
takes ~60% CPU on my old no-GPU hardware) and repairing damage from e.g.
dragging offscreen and back on, or covering and uncovering the window
with another window.
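
In outline the shape is something like this (a sketch; the names and
the constant are made up):

```c
#include <stdio.h>
#include <stdbool.h>

/* Instead of repainting unconditionally every frame, repaint every
   REFRESH_FRAMES-th tick: cheap enough on old no-GPU hardware, but
   frequent enough to repair window damage soon after it happens. */
#define REFRESH_FRAMES 32

static bool should_redraw(unsigned frame) {
    return frame % REFRESH_FRAMES == 0;
}

int main(void) {
    for (unsigned frame = 0; frame < 5; ++frame)
        printf("frame %u: %s\n", frame,
               should_redraw(frame) ? "redraw" : "skip");
    return 0;
}
```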
https://todo.sr.ht/~sforman/thun-der/27
It doesn't let you overwrite definitions that are loaded from defs.txt.
It DOES let you overwrite builtins, but that doesn't matter, because
builtins are handled by joy_eval() before it checks the dictionary, so
in practice the shadowing definitions are never evaluated even though
they are put into the dictionary. Whew! It's hacky but it works!
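
In miniature, the control flow that makes the shadowing harmless (a
sketch, not the real joy_eval()):

```c
#include <stdio.h>
#include <string.h>

void joy_eval(const char *symbol) {
    if (strcmp(symbol, "dup") == 0) {   /* builtins are checked first... */
        puts("builtin dup");
        return;
    }
    printf("dictionary lookup: %s\n", symbol);  /* ...the dict only after */
}

int main(void) {
    joy_eval("dup");   /* hits the builtin even if "dup" is in the dict */
    joy_eval("sqr");   /* non-builtins fall through to the dictionary */
    return 0;
}
```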