1 Commits

Author SHA1 Message Date
34fd8fa0a0 untagged union types 2025-09-20 12:41:58 +02:00
45 changed files with 17 additions and 36222 deletions

View File

@@ -1,16 +1,3 @@
# Crepuscular
# beginner-friendly-lang
beginner friendly (more than python!) functional programming language
## Bugs
Anything that looks like a bug is probably a bug; PLEASE report it!!
## Repository structure
- [Language specification](./spec.md)
- [Reference tree-sitter parser](./tree-sitter/)
## Language proposals
Just open a Gitea issue
beginner friendly (more than python!) functional programming language

45
spec.md
View File

@@ -3,9 +3,6 @@
- put `TODO`s into the specification for things that should be revisited in the future
TODO: t Await type with special unify
# Goals
- functional programming: everything is pure, and everything is a function
- easy to learn, even as a first programming language
@@ -17,7 +14,9 @@ TODO: t Await type with special unify
One of:
- `t -> t`: pure function taking `t` as argument, and returning `t`
- `{ field1: xt, field2: yt }`: non-nominal records
- `'Case1 t | 'Case2 t2 | 'Case3 t3`: tagged unions
- `'MyTag t`: tagged type. If `t` is already tagged, the old tag will be overwritten!
The type can be omitted, defaulting to `Unit`: `'Example` is identical to `'Example Unit`
- `t | t2 | t3`: un-tagged unions
- `with myGenericName: x`: introduce "generic" type `myGenericName`, which can be used in type `x`
- `&a 'EOL | 'Cons {v: t, next: a}`: [recursive type](##recursive-types)
- `?Identifier`: the partial type `Identifier`
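To make the forms above concrete, here is a minimal, non-normative sketch of type definitions using this syntax; the names `Point`, `Shape`, `NumOrInt` and `MyList` are invented for illustration.
```
# a non-nominal record type
type Point = {x: Int8, y: Int8}

# a tagged union; the payload of 'Empty is omitted, so it defaults to Unit
type Shape = 'Circle Num | 'Rect {w: Num, h: Num} | 'Empty

# an un-tagged union
type NumOrInt = Num | Int8

# a generic, recursive list: t is the element type, &a names the type itself
type t MyList = &a 'EOL | 'Cons {v: t, next: a}
```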
@@ -59,7 +58,7 @@ type Illegal2 = &a {x: a, y: Int8}
The definition of "infinite size" is:
- for records: if _any_ of the field types has "infinite size"
- for unions: if _all_ of the cases has "infinite size"
- for unions: if _all_ of the cases have "infinite size"
- for functions: never
- for recursive types:
- if the type was already visited: yes
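A hedged illustration of these rules, reusing the recursive list type from the list of type forms above and the `Illegal2` example from the hunk context:
```
# legal: the 'EOL case has finite size, so not _all_ cases of the
# union have "infinite size", and the recursion through next: a is allowed
type t List = &a 'EOL | 'Cons {v: t, next: a}

# illegal: a record has "infinite size" if _any_ of its fields does,
# and x: a refers straight back to the type being defined
type Illegal2 = &a {x: a, y: Int8}
```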
@@ -114,9 +113,10 @@ extend Plan with 'WritelnPlan Unit
Check if type is compatible-with / assignable-to type requirements.
- a type is compatible with itself, of course
- a tagged union is compatible with another,
- an (un-tagged) union is compatible with another,
if the other one contains at least all cases from this union,
and those cases are compatible
determined by whether or not those are compatible
- a `'Tag t` is compatible with `t`, or `'Tag t`, but NOT `'OtherTag t`
- a non-nominal record is compatible with another,
if the other one contains a _subset_ of the fields of this one,
and the field types are compatible.
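A hedged sketch of a few of these rules in action; the definitions are invented, and the assumption that `def` signatures and function arguments are checked via this compatibility relation is mine, not the spec's:
```
# {x: Int8, y: Int8} is compatible with {x: Int8}:
# the required fields are a subset of the supplied ones
def getX : {x: Int8} -> Int8 =
  r -> r:x
def works = getX({x: 1, y: 2})

# 'MyTag t is compatible with plain t ...
def succ : Int8 -> Int8 = n -> n + 1
def alsoWorks = succ('MyTag 4)

# ... but NOT with a differently tagged type; this would be rejected:
# def broken : 'OtherTag Int8 = 'MyTag 4
```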
@@ -132,19 +132,9 @@ TODO: need more details?
This process is performed on the resulting types of two merging branches
Tries, in order:
- if one of the types is a tagged union,
which contains only one case with an identical type,
and the other is not a tagged union,
it will "attach" that case onto the other type, like this:
`'N Num | 'I Int` and `Num` -> `'N Num | 'I Int`
- if one of the types is a tagged union with exactly one case,
and the other one is not a union,
it will put that tag onto it, and phi-merge the inner types.
example: `'N Num` and `Num` -> `'N Num`
- if both types are tagged unions,
the result is a tagged union containing the cases of both tagged unions.
union cases that are in both branches will have their types phi-unified too.
example: `'Ok ('N Num | 'I Int) | 'SomeErr` and `'Ok Num | 'AnotherErr` -> `'Ok ('N Num) | 'SomeErr | 'AnotherErr`
- if one of them is `'Tag a`, and the other is `b` (tagless!), they will unify to `'Tag PHI(a,b)`
- if one of them is a union, and the other one is some type `t`,
it will first try to unify `t` with each case, and otherwise create a new case for `t`
- if both types are non-nominal records,
the result will contain only fields that are present in both types.
The types of the fields will get phi-unified too
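A hedged sketch of how two branches phi-merge at an `if`; the functions are invented, and which rule fires in each case is my reading of the list above:
```
# then-branch: tagged 'N ..., else-branch: tagless;
# they phi-merge to the tagged type, as in the `'N Num` and `Num` example
def pickNum = flag -> n ->
  if flag then 'N n else n

# each inner if merges to a tagged union ('Ok _ | 'ParseErr and 'Ok _ | 'RangeErr);
# the outer if then merges the cases of both: 'Ok _ | 'ParseErr | 'RangeErr
def check = flag -> n ->
  if flag then (if n then 'Ok n else 'ParseErr)
  else (if n then 'Ok n else 'RangeErr)
```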
@@ -358,19 +348,14 @@ def add = a -> b -> a + b
- `func(arg1, arg2)`: function application. requires at least one argument. partial function applications are allowed too
- `func(_, arg)`: bind some args of the function; return a function taking the args marked with `_`, as well as any non-specified ones
- `expr :: type`: down-cast type
- `recExpr and otherRecExpr`:
if both types are record types, or tagged record types:
- "sum" fields together of both record expressions.
type checking: phi-unify `recExpr` with `otherRecExpr`, and require that both are non-nominal record types
- else: identical to `\`a and b\`(a, b)`
- `recExpr with fieldname: newFieldValue`: overwrites or adds a field to a record type.
type checking: identical to `recExpr and {fieldname: newFieldValue}`
- `recExpr and otherRecExpr`: "sum" fields together of both record expressions.
type checking: phi-unify `recExpr` with `otherRecExpr`, and require that both are non-nominal record types
- `if cond then a else b`
- `{field1: val1, field2: val2}` field construction
- `match expr with <match cases>`: [pattern matching](##pattern-matching)
- `'Tag expr`
- `a = b`: the [equality operator](##equality-operator)
- `-a`: identical to `\`-a\`(a)` name: "negate"
- `a or b`: identical to `\`a or b\`(a, b)` name: "or"
- `a ^ b`: identical to `\`a^b\`(a, b)` name: "exponent"
- `a + b`: identical to `\`a+b\`(a, b)` name: "sum"
- `a - b`: identical to `\`a-b\`(a, b)` name: "difference"
- `a * b`: identical to `\`a*b\`(a, b)` name: "times"
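A hedged sketch exercising a few of these forms; all of the names below are invented, and `add` just restates the definition from the hunk context above:
```
def base = {x: 1, y: 2}

# record update, and record "sum" (merging the fields of both records)
def moved = base with x: 5
def bigger = base and {z: 3}

# partial application: addTwo is a function of the argument marked with _
def add = a -> b -> a + b
def addTwo = add(_, 2)

# operators desugar to backtick-named functions, e.g. `a+b`(a, b)
def isFour = v -> (v ^ 2) = 4
```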
@@ -661,4 +646,4 @@ def [a] RUNTIME_EVAL(io: IO[a]) -> a {
def RUNTIME_ENTRY() {
RUNTIME_EVAL ( main() )
}
```
```

View File

@@ -1,46 +0,0 @@
root = true
[*]
charset = utf-8
[*.{json,toml,yml,gyp}]
indent_style = space
indent_size = 2
[*.js]
indent_style = space
indent_size = 2
[*.scm]
indent_style = space
indent_size = 2
[*.{c,cc,h}]
indent_style = space
indent_size = 4
[*.rs]
indent_style = space
indent_size = 4
[*.{py,pyi}]
indent_style = space
indent_size = 4
[*.swift]
indent_style = space
indent_size = 4
[*.go]
indent_style = tab
indent_size = 8
[Makefile]
indent_style = tab
indent_size = 8
[parser.c]
indent_size = 2
[{alloc,array,parser}.h]
indent_size = 2

View File

@@ -1,42 +0,0 @@
* text=auto eol=lf
# Generated source files
src/*.json linguist-generated
src/parser.c linguist-generated
src/tree_sitter/* linguist-generated
# C bindings
bindings/c/** linguist-generated
CMakeLists.txt linguist-generated
Makefile linguist-generated
# Rust bindings
bindings/rust/* linguist-generated
Cargo.toml linguist-generated
Cargo.lock linguist-generated
# Node.js bindings
bindings/node/* linguist-generated
binding.gyp linguist-generated
package.json linguist-generated
package-lock.json linguist-generated
# Python bindings
bindings/python/** linguist-generated
setup.py linguist-generated
pyproject.toml linguist-generated
# Go bindings
bindings/go/* linguist-generated
go.mod linguist-generated
go.sum linguist-generated
# Swift bindings
bindings/swift/** linguist-generated
Package.swift linguist-generated
Package.resolved linguist-generated
# Zig bindings
bindings/zig/* linguist-generated
build.zig linguist-generated
build.zig.zon linguist-generated

View File

@@ -1,50 +0,0 @@
# Rust artifacts
target/
Cargo.lock
# Node artifacts
build/
prebuilds/
node_modules/
package-lock.json
# Swift artifacts
.build/
Package.resolved
# Go artifacts
_obj/
# Python artifacts
.venv/
dist/
*.egg-info
*.whl
# C artifacts
*.a
*.so
*.so.*
*.dylib
*.dll
*.pc
*.exp
*.lib
# Zig artifacts
.zig-cache/
zig-cache/
zig-out/
# Example dirs
/examples/*/
# Grammar volatiles
*.wasm
*.obj
*.o
# Archives
*.tar.gz
*.tgz
*.zip

View File

@@ -1,66 +0,0 @@
cmake_minimum_required(VERSION 3.13)
project(tree-sitter-crepuscular
VERSION "0.1.0"
DESCRIPTION "Parser for the Crepuscular(ray) functional programming language"
HOMEPAGE_URL "https://gitea.vxcc.dev/alexander.nutz/beginner-friendly-lang"
LANGUAGES C)
option(BUILD_SHARED_LIBS "Build using shared libraries" ON)
option(TREE_SITTER_REUSE_ALLOCATOR "Reuse the library allocator" OFF)
set(TREE_SITTER_ABI_VERSION 15 CACHE STRING "Tree-sitter ABI version")
if(NOT ${TREE_SITTER_ABI_VERSION} MATCHES "^[0-9]+$")
unset(TREE_SITTER_ABI_VERSION CACHE)
message(FATAL_ERROR "TREE_SITTER_ABI_VERSION must be an integer")
endif()
include(GNUInstallDirs)
find_program(TREE_SITTER_CLI tree-sitter DOC "Tree-sitter CLI")
add_custom_command(OUTPUT "${CMAKE_CURRENT_SOURCE_DIR}/src/parser.c"
DEPENDS "${CMAKE_CURRENT_SOURCE_DIR}/src/grammar.json"
COMMAND "${TREE_SITTER_CLI}" generate src/grammar.json
--abi=${TREE_SITTER_ABI_VERSION}
WORKING_DIRECTORY "${CMAKE_CURRENT_SOURCE_DIR}"
COMMENT "Generating parser.c")
add_library(tree-sitter-crepuscular src/parser.c)
if(EXISTS ${CMAKE_CURRENT_SOURCE_DIR}/src/scanner.c)
target_sources(tree-sitter-crepuscular PRIVATE src/scanner.c)
endif()
target_include_directories(tree-sitter-crepuscular
PRIVATE src
INTERFACE $<BUILD_INTERFACE:${CMAKE_CURRENT_SOURCE_DIR}/bindings/c>
$<INSTALL_INTERFACE:${CMAKE_INSTALL_INCLUDEDIR}>)
target_compile_definitions(tree-sitter-crepuscular PRIVATE
$<$<BOOL:${TREE_SITTER_REUSE_ALLOCATOR}>:TREE_SITTER_REUSE_ALLOCATOR>
$<$<CONFIG:Debug>:TREE_SITTER_DEBUG>)
set_target_properties(tree-sitter-crepuscular
PROPERTIES
C_STANDARD 11
POSITION_INDEPENDENT_CODE ON
SOVERSION "${TREE_SITTER_ABI_VERSION}.${PROJECT_VERSION_MAJOR}"
DEFINE_SYMBOL "")
configure_file(bindings/c/tree-sitter-crepuscular.pc.in
"${CMAKE_CURRENT_BINARY_DIR}/tree-sitter-crepuscular.pc" @ONLY)
install(DIRECTORY "${CMAKE_CURRENT_SOURCE_DIR}/bindings/c/tree_sitter"
DESTINATION "${CMAKE_INSTALL_INCLUDEDIR}"
FILES_MATCHING PATTERN "*.h")
install(FILES "${CMAKE_CURRENT_BINARY_DIR}/tree-sitter-crepuscular.pc"
DESTINATION "${CMAKE_INSTALL_LIBDIR}/pkgconfig")
install(TARGETS tree-sitter-crepuscular
LIBRARY DESTINATION "${CMAKE_INSTALL_LIBDIR}")
file(GLOB QUERIES queries/*.scm)
install(FILES ${QUERIES}
DESTINATION "${CMAKE_INSTALL_DATADIR}/tree-sitter/queries/crepuscular")
add_custom_target(ts-test "${TREE_SITTER_CLI}" test
WORKING_DIRECTORY "${CMAKE_CURRENT_SOURCE_DIR}"
COMMENT "tree-sitter test")

View File

@@ -1,34 +0,0 @@
[package]
name = "tree-sitter-crepuscular"
description = "Parser for the Crepuscular(ray) functional programming language"
version = "0.1.0"
authors = ["Alexander Nutz <alexander.nutz@vxcc.dev>"]
license = "MIT"
readme = "README.md"
keywords = ["incremental", "parsing", "tree-sitter", "crepuscular"]
categories = ["parser-implementations", "parsing", "text-editors"]
repository = "https://gitea.vxcc.dev/alexander.nutz/beginner-friendly-lang"
edition = "2021"
autoexamples = false
build = "bindings/rust/build.rs"
include = [
"bindings/rust/*",
"grammar.js",
"queries/*",
"src/*",
"tree-sitter.json",
"LICENSE",
]
[lib]
path = "bindings/rust/lib.rs"
[dependencies]
tree-sitter-language = "0.1"
[build-dependencies]
cc = "1.2"
[dev-dependencies]
tree-sitter = "0.25.8"

View File

@@ -1,99 +0,0 @@
ifeq ($(OS),Windows_NT)
$(error Windows is not supported)
endif
LANGUAGE_NAME := tree-sitter-crepuscular
HOMEPAGE_URL := https://gitea.vxcc.dev/alexander.nutz/beginner-friendly-lang
VERSION := 0.1.0
# repository
SRC_DIR := src
TS ?= tree-sitter
# install directory layout
PREFIX ?= /usr/local
DATADIR ?= $(PREFIX)/share
INCLUDEDIR ?= $(PREFIX)/include
LIBDIR ?= $(PREFIX)/lib
PCLIBDIR ?= $(LIBDIR)/pkgconfig
# source/object files
PARSER := $(SRC_DIR)/parser.c
EXTRAS := $(filter-out $(PARSER),$(wildcard $(SRC_DIR)/*.c))
OBJS := $(patsubst %.c,%.o,$(PARSER) $(EXTRAS))
# flags
ARFLAGS ?= rcs
override CFLAGS += -I$(SRC_DIR) -std=c11 -fPIC
# ABI versioning
SONAME_MAJOR = $(shell sed -n 's/\#define LANGUAGE_VERSION //p' $(PARSER))
SONAME_MINOR = $(word 1,$(subst ., ,$(VERSION)))
# OS-specific bits
ifeq ($(shell uname),Darwin)
SOEXT = dylib
SOEXTVER_MAJOR = $(SONAME_MAJOR).$(SOEXT)
SOEXTVER = $(SONAME_MAJOR).$(SONAME_MINOR).$(SOEXT)
LINKSHARED = -dynamiclib -Wl,-install_name,$(LIBDIR)/lib$(LANGUAGE_NAME).$(SOEXTVER),-rpath,@executable_path/../Frameworks
else
SOEXT = so
SOEXTVER_MAJOR = $(SOEXT).$(SONAME_MAJOR)
SOEXTVER = $(SOEXT).$(SONAME_MAJOR).$(SONAME_MINOR)
LINKSHARED = -shared -Wl,-soname,lib$(LANGUAGE_NAME).$(SOEXTVER)
endif
ifneq ($(filter $(shell uname),FreeBSD NetBSD DragonFly),)
PCLIBDIR := $(PREFIX)/libdata/pkgconfig
endif
all: lib$(LANGUAGE_NAME).a lib$(LANGUAGE_NAME).$(SOEXT) $(LANGUAGE_NAME).pc
lib$(LANGUAGE_NAME).a: $(OBJS)
$(AR) $(ARFLAGS) $@ $^
lib$(LANGUAGE_NAME).$(SOEXT): $(OBJS)
$(CC) $(LDFLAGS) $(LINKSHARED) $^ $(LDLIBS) -o $@
ifneq ($(STRIP),)
$(STRIP) $@
endif
$(LANGUAGE_NAME).pc: bindings/c/$(LANGUAGE_NAME).pc.in
sed -e 's|@PROJECT_VERSION@|$(VERSION)|' \
-e 's|@CMAKE_INSTALL_LIBDIR@|$(LIBDIR:$(PREFIX)/%=%)|' \
-e 's|@CMAKE_INSTALL_INCLUDEDIR@|$(INCLUDEDIR:$(PREFIX)/%=%)|' \
-e 's|@PROJECT_DESCRIPTION@|$(DESCRIPTION)|' \
-e 's|@PROJECT_HOMEPAGE_URL@|$(HOMEPAGE_URL)|' \
-e 's|@CMAKE_INSTALL_PREFIX@|$(PREFIX)|' $< > $@
$(PARSER): $(SRC_DIR)/grammar.json
$(TS) generate $^
install: all
install -d '$(DESTDIR)$(DATADIR)'/tree-sitter/queries/crepuscular '$(DESTDIR)$(INCLUDEDIR)'/tree_sitter '$(DESTDIR)$(PCLIBDIR)' '$(DESTDIR)$(LIBDIR)'
install -m644 bindings/c/tree_sitter/$(LANGUAGE_NAME).h '$(DESTDIR)$(INCLUDEDIR)'/tree_sitter/$(LANGUAGE_NAME).h
install -m644 $(LANGUAGE_NAME).pc '$(DESTDIR)$(PCLIBDIR)'/$(LANGUAGE_NAME).pc
install -m644 lib$(LANGUAGE_NAME).a '$(DESTDIR)$(LIBDIR)'/lib$(LANGUAGE_NAME).a
install -m755 lib$(LANGUAGE_NAME).$(SOEXT) '$(DESTDIR)$(LIBDIR)'/lib$(LANGUAGE_NAME).$(SOEXTVER)
ln -sf lib$(LANGUAGE_NAME).$(SOEXTVER) '$(DESTDIR)$(LIBDIR)'/lib$(LANGUAGE_NAME).$(SOEXTVER_MAJOR)
ln -sf lib$(LANGUAGE_NAME).$(SOEXTVER_MAJOR) '$(DESTDIR)$(LIBDIR)'/lib$(LANGUAGE_NAME).$(SOEXT)
ifneq ($(wildcard queries/*.scm),)
install -m644 queries/*.scm '$(DESTDIR)$(DATADIR)'/tree-sitter/queries/crepuscular
endif
uninstall:
$(RM) '$(DESTDIR)$(LIBDIR)'/lib$(LANGUAGE_NAME).a \
'$(DESTDIR)$(LIBDIR)'/lib$(LANGUAGE_NAME).$(SOEXTVER) \
'$(DESTDIR)$(LIBDIR)'/lib$(LANGUAGE_NAME).$(SOEXTVER_MAJOR) \
'$(DESTDIR)$(LIBDIR)'/lib$(LANGUAGE_NAME).$(SOEXT) \
'$(DESTDIR)$(INCLUDEDIR)'/tree_sitter/$(LANGUAGE_NAME).h \
'$(DESTDIR)$(PCLIBDIR)'/$(LANGUAGE_NAME).pc
$(RM) -r '$(DESTDIR)$(DATADIR)'/tree-sitter/queries/crepuscular
clean:
$(RM) $(OBJS) $(LANGUAGE_NAME).pc lib$(LANGUAGE_NAME).a lib$(LANGUAGE_NAME).$(SOEXT)
test:
$(TS) test
.PHONY: all install uninstall clean test

View File

@@ -1,41 +0,0 @@
// swift-tools-version:5.3
import Foundation
import PackageDescription
var sources = ["src/parser.c"]
if FileManager.default.fileExists(atPath: "src/scanner.c") {
sources.append("src/scanner.c")
}
let package = Package(
name: "TreeSitterCrepuscular",
products: [
.library(name: "TreeSitterCrepuscular", targets: ["TreeSitterCrepuscular"]),
],
dependencies: [
.package(name: "SwiftTreeSitter", url: "https://github.com/tree-sitter/swift-tree-sitter", from: "0.9.0"),
],
targets: [
.target(
name: "TreeSitterCrepuscular",
dependencies: [],
path: ".",
sources: sources,
resources: [
.copy("queries")
],
publicHeadersPath: "bindings/swift",
cSettings: [.headerSearchPath("src")]
),
.testTarget(
name: "TreeSitterCrepuscularTests",
dependencies: [
"SwiftTreeSitter",
"TreeSitterCrepuscular",
],
path: "bindings/swift/TreeSitterCrepuscularTests"
)
],
cLanguageStandard: .c11
)

View File

@@ -1,35 +0,0 @@
{
"targets": [
{
"target_name": "tree_sitter_crepuscular_binding",
"dependencies": [
"<!(node -p \"require('node-addon-api').targets\"):node_addon_api_except",
],
"include_dirs": [
"src",
],
"sources": [
"bindings/node/binding.cc",
"src/parser.c",
],
"variables": {
"has_scanner": "<!(node -p \"fs.existsSync('src/scanner.c')\")"
},
"conditions": [
["has_scanner=='true'", {
"sources+": ["src/scanner.c"],
}],
["OS!='win'", {
"cflags_c": [
"-std=c11",
],
}, { # OS == "win"
"cflags_c": [
"/std:c11",
"/utf-8",
],
}],
],
}
]
}

View File

@@ -1,10 +0,0 @@
prefix=@CMAKE_INSTALL_PREFIX@
libdir=${prefix}/@CMAKE_INSTALL_LIBDIR@
includedir=${prefix}/@CMAKE_INSTALL_INCLUDEDIR@
Name: tree-sitter-crepuscular
Description: @PROJECT_DESCRIPTION@
URL: @PROJECT_HOMEPAGE_URL@
Version: @PROJECT_VERSION@
Libs: -L${libdir} -ltree-sitter-crepuscular
Cflags: -I${includedir}

View File

@@ -1,16 +0,0 @@
#ifndef TREE_SITTER_CREPUSCULAR_H_
#define TREE_SITTER_CREPUSCULAR_H_
typedef struct TSLanguage TSLanguage;
#ifdef __cplusplus
extern "C" {
#endif
const TSLanguage *tree_sitter_crepuscular(void);
#ifdef __cplusplus
}
#endif
#endif // TREE_SITTER_CREPUSCULAR_H_

View File

@@ -1,15 +0,0 @@
package tree_sitter_crepuscular
// #cgo CFLAGS: -std=c11 -fPIC
// #include "../../src/parser.c"
// #if __has_include("../../src/scanner.c")
// #include "../../src/scanner.c"
// #endif
import "C"
import "unsafe"
// Get the tree-sitter Language for this grammar.
func Language() unsafe.Pointer {
return unsafe.Pointer(C.tree_sitter_crepuscular())
}

View File

@@ -1,15 +0,0 @@
package tree_sitter_crepuscular_test
import (
"testing"
tree_sitter "github.com/tree-sitter/go-tree-sitter"
tree_sitter_crepuscular "gitea.vxcc.dev/alexander.nutz/beginner-friendly-lang/bindings/go"
)
func TestCanLoadGrammar(t *testing.T) {
language := tree_sitter.NewLanguage(tree_sitter_crepuscular.Language())
if language == nil {
t.Errorf("Error loading Crepuscular grammar")
}
}

View File

@@ -1,19 +0,0 @@
#include <napi.h>
typedef struct TSLanguage TSLanguage;
extern "C" TSLanguage *tree_sitter_crepuscular();
// "tree-sitter", "language" hashed with BLAKE2
const napi_type_tag LANGUAGE_TYPE_TAG = {
0x8AF2E5212AD58ABF, 0xD5006CAD83ABBA16
};
Napi::Object Init(Napi::Env env, Napi::Object exports) {
auto language = Napi::External<TSLanguage>::New(env, tree_sitter_crepuscular());
language.TypeTag(&LANGUAGE_TYPE_TAG);
exports["language"] = language;
return exports;
}
NODE_API_MODULE(tree_sitter_crepuscular_binding, Init)

View File

@@ -1,9 +0,0 @@
const assert = require("node:assert");
const { test } = require("node:test");
const Parser = require("tree-sitter");
test("can load grammar", () => {
const parser = new Parser();
assert.doesNotThrow(() => parser.setLanguage(require(".")));
});

View File

@@ -1,27 +0,0 @@
type BaseNode = {
type: string;
named: boolean;
};
type ChildNode = {
multiple: boolean;
required: boolean;
types: BaseNode[];
};
type NodeInfo =
| (BaseNode & {
subtypes: BaseNode[];
})
| (BaseNode & {
fields: { [name: string]: ChildNode };
children: ChildNode[];
});
type Language = {
language: unknown;
nodeTypeInfo: NodeInfo[];
};
declare const language: Language;
export = language;

View File

@@ -1,11 +0,0 @@
const root = require("path").join(__dirname, "..", "..");
module.exports =
typeof process.versions.bun === "string"
// Support `bun build --compile` by being statically analyzable enough to find the .node file at build-time
? require(`../../prebuilds/${process.platform}-${process.arch}/tree-sitter-crepuscular.node`)
: require("node-gyp-build")(root);
try {
module.exports.nodeTypeInfo = require("../../src/node-types.json");
} catch (_) {}

View File

@@ -1,12 +0,0 @@
from unittest import TestCase
import tree_sitter
import tree_sitter_crepuscular
class TestLanguage(TestCase):
def test_can_load_grammar(self):
try:
tree_sitter.Language(tree_sitter_crepuscular.language())
except Exception:
self.fail("Error loading Crepuscular grammar")

View File

@@ -1,42 +0,0 @@
"""Parser for the Crepuscular(ray) functional programming language"""
from importlib.resources import files as _files
from ._binding import language
def _get_query(name, file):
query = _files(f"{__package__}.queries") / file
globals()[name] = query.read_text()
return globals()[name]
def __getattr__(name):
# NOTE: uncomment these to include any queries that this grammar contains:
# if name == "HIGHLIGHTS_QUERY":
# return _get_query("HIGHLIGHTS_QUERY", "highlights.scm")
# if name == "INJECTIONS_QUERY":
# return _get_query("INJECTIONS_QUERY", "injections.scm")
# if name == "LOCALS_QUERY":
# return _get_query("LOCALS_QUERY", "locals.scm")
# if name == "TAGS_QUERY":
# return _get_query("TAGS_QUERY", "tags.scm")
raise AttributeError(f"module {__name__!r} has no attribute {name!r}")
__all__ = [
"language",
# "HIGHLIGHTS_QUERY",
# "INJECTIONS_QUERY",
# "LOCALS_QUERY",
# "TAGS_QUERY",
]
def __dir__():
return sorted(__all__ + [
"__all__", "__builtins__", "__cached__", "__doc__", "__file__",
"__loader__", "__name__", "__package__", "__path__", "__spec__",
])

View File

@@ -1,10 +0,0 @@
from typing import Final
# NOTE: uncomment these to include any queries that this grammar contains:
# HIGHLIGHTS_QUERY: Final[str]
# INJECTIONS_QUERY: Final[str]
# LOCALS_QUERY: Final[str]
# TAGS_QUERY: Final[str]
def language() -> object: ...

View File

@@ -1,35 +0,0 @@
#include <Python.h>
typedef struct TSLanguage TSLanguage;
TSLanguage *tree_sitter_crepuscular(void);
static PyObject* _binding_language(PyObject *Py_UNUSED(self), PyObject *Py_UNUSED(args)) {
return PyCapsule_New(tree_sitter_crepuscular(), "tree_sitter.Language", NULL);
}
static struct PyModuleDef_Slot slots[] = {
#ifdef Py_GIL_DISABLED
{Py_mod_gil, Py_MOD_GIL_NOT_USED},
#endif
{0, NULL}
};
static PyMethodDef methods[] = {
{"language", _binding_language, METH_NOARGS,
"Get the tree-sitter language for this grammar."},
{NULL, NULL, 0, NULL}
};
static struct PyModuleDef module = {
.m_base = PyModuleDef_HEAD_INIT,
.m_name = "_binding",
.m_doc = NULL,
.m_size = 0,
.m_methods = methods,
.m_slots = slots,
};
PyMODINIT_FUNC PyInit__binding(void) {
return PyModuleDef_Init(&module);
}

View File

@@ -1,21 +0,0 @@
fn main() {
let src_dir = std::path::Path::new("src");
let mut c_config = cc::Build::new();
c_config.std("c11").include(src_dir);
#[cfg(target_env = "msvc")]
c_config.flag("-utf-8");
let parser_path = src_dir.join("parser.c");
c_config.file(&parser_path);
println!("cargo:rerun-if-changed={}", parser_path.to_str().unwrap());
let scanner_path = src_dir.join("scanner.c");
if scanner_path.exists() {
c_config.file(&scanner_path);
println!("cargo:rerun-if-changed={}", scanner_path.to_str().unwrap());
}
c_config.compile("tree-sitter-crepuscular");
}

View File

@@ -1,51 +0,0 @@
//! This crate provides Crepuscular language support for the [tree-sitter] parsing library.
//!
//! Typically, you will use the [`LANGUAGE`] constant to add this language to a
//! tree-sitter [`Parser`], and then use the parser to parse some code:
//!
//! ```
//! let code = r#"
//! "#;
//! let mut parser = tree_sitter::Parser::new();
//! let language = tree_sitter_crepuscular::LANGUAGE;
//! parser
//! .set_language(&language.into())
//! .expect("Error loading Crepuscular parser");
//! let tree = parser.parse(code, None).unwrap();
//! assert!(!tree.root_node().has_error());
//! ```
//!
//! [`Parser`]: https://docs.rs/tree-sitter/0.25.8/tree_sitter/struct.Parser.html
//! [tree-sitter]: https://tree-sitter.github.io/
use tree_sitter_language::LanguageFn;
extern "C" {
fn tree_sitter_crepuscular() -> *const ();
}
/// The tree-sitter [`LanguageFn`] for this grammar.
pub const LANGUAGE: LanguageFn = unsafe { LanguageFn::from_raw(tree_sitter_crepuscular) };
/// The content of the [`node-types.json`] file for this grammar.
///
/// [`node-types.json`]: https://tree-sitter.github.io/tree-sitter/using-parsers/6-static-node-types
pub const NODE_TYPES: &str = include_str!("../../src/node-types.json");
// NOTE: uncomment these to include any queries that this grammar contains:
// pub const HIGHLIGHTS_QUERY: &str = include_str!("../../queries/highlights.scm");
// pub const INJECTIONS_QUERY: &str = include_str!("../../queries/injections.scm");
// pub const LOCALS_QUERY: &str = include_str!("../../queries/locals.scm");
// pub const TAGS_QUERY: &str = include_str!("../../queries/tags.scm");
#[cfg(test)]
mod tests {
#[test]
fn test_can_load_grammar() {
let mut parser = tree_sitter::Parser::new();
parser
.set_language(&super::LANGUAGE.into())
.expect("Error loading Crepuscular parser");
}
}

View File

@@ -1,16 +0,0 @@
#ifndef TREE_SITTER_CREPUSCULAR_H_
#define TREE_SITTER_CREPUSCULAR_H_
typedef struct TSLanguage TSLanguage;
#ifdef __cplusplus
extern "C" {
#endif
const TSLanguage *tree_sitter_crepuscular(void);
#ifdef __cplusplus
}
#endif
#endif // TREE_SITTER_CREPUSCULAR_H_

View File

@@ -1,12 +0,0 @@
import XCTest
import SwiftTreeSitter
import TreeSitterCrepuscular
final class TreeSitterCrepuscularTests: XCTestCase {
func testCanLoadGrammar() throws {
let parser = Parser()
let language = Language(language: tree_sitter_crepuscular())
XCTAssertNoThrow(try parser.setLanguage(language),
"Error loading Crepuscular grammar")
}
}

View File

@@ -1,8 +0,0 @@
set -e
tree-sitter generate
make
tree-sitter build --wasm
rm -f /usr/lib64/nvim/parser/crepuscular.so
cp libtree-sitter-crepuscular.so /usr/lib64/nvim/parser/crepuscular.so
rm -f ~/.config/nvim/after/queries/crepuscular/*
cp queries/* ~/.config/nvim/after/queries/crepuscular/

View File

@@ -1,59 +0,0 @@
def tricky =
let ab = funct(1)
let cd = 3
ab + cd
## *_this_*
## does *_this_* work?
## video::https://code.org[Cool]
##
## a list:
## * item1
## * item2
##
## === test
## Double TS language injection goes hard
##
## [source,c]
## ----
## int main() {
## return 0;
## }
## ----
type t List = 'EOL | 'Cons {hd: t, tl: t List}
# comment
def xa =
let data = 2
if data then
data and other and more
else 'MyTag myList ++ [4 + 2 * -(4) - 1] = 2^2
## adds two numbers
## together
def add : int -> int -> int =
a -> b : int -> a + b
def lol =
let data = 2
match x with
a | b -> (match c with y -> 3 | z ->5)
| c -> 3
def main =
let x = 2 ( 5 )
[6 :: int,7,8,"helo world"]
def example = a -> b -> 1:x => y
extensible union io.Plan
extend io.Plan with 'WriteFile Unit
type a b = {a:Int,b: (Array Of Integ), c: a3}
type ?lol.FullPartial = {}
type t List = with lol: &t2 with cursed: 'Tag [t,t2] List option | 'Case2 | 'Case3 | ...?rem -> int -> float

View File

@@ -1,5 +0,0 @@
module gitea.vxcc.dev/alexander.nutz/beginner-friendly-lang
go 1.22
require github.com/tree-sitter/go-tree-sitter v0.24.0

View File

@@ -1,406 +0,0 @@
/**
* @file Parser for the Crepuscular(ray) functional programming language
* @author Alexander Nutz <alexander.nutz@vxcc.dev>
* @license MIT
*/
/// <reference types="tree-sitter-cli/dsl" />
// @ts-check
module.exports = grammar({
name: "crepuscular",
reserved: {
toplevel_kw: $ =>
['type', 'with', 'extensible', 'extend', 'union', 'def', 'await', 'let', 'if', 'then', 'else', 'in', 'match'],
},
extras: ($) => [
/\s/, // whitespace
$.comment,
$.section_comment,
],
word: $ => $._identifier_tok,
precedences: _ => [
[
"parentrized",
"function_call",
"expr_atom",
"ident",
"exponent",
"multiplication",
"negate",
"addition",
"concat",
"binary_boolean",
"equal",
"if",
"let",
"match_arm",
"new_match_arm",
"await",
"tag",
],
],
rules: {
source_file: $ => repeat($.definition),
_identifier_tok: $ => token(/[a-zA-Z_]+[a-zA-Z0-9_]*/),
identifier: $ => choice(
reserved('toplevel_kw', $._identifier_tok),
token(/`[^\n\`]+`/)
),
path: $ => prec.left(seq($.identifier, repeat(seq('.', $.identifier)))),
comment: $ =>
token(seq("# ", /.*/)),
section_comment: $ =>
token(seq("###", /.*/)),
doc_comment_value: $ =>
choice(
token.immediate(/\n/),
token.immediate(/[^\n]*\n?/)),
doc_comment: $ =>
repeat1(seq('##', token.immediate(/ */), $.doc_comment_value)),
definition: $ => seq(
optional(field('doc', $.doc_comment)),
field('body', choice(
$.full_partial_type_definition,
$.type_definition,
$.extensible_union,
$.extend_decl,
$.def,
)),
),
extensible_union: $ => seq(
'extensible', 'union', $.path),
extend_decl: $ => seq(
'extend',
field('what', $.path),
'with',
field('tag', $.tag),
field('ty', $.type)),
full_partial_type_definition: $ => seq(
"type",
"?", field('name', $.path),
"=",
field('type', $.type)
),
type_definition: $ => seq(
"type",
optional(choice(
field('arg', $.identifier),
seq(
'[',
repeat(seq(field('arg', $.identifier), ',')),
field('arg', $.identifier),
']'),
)),
field('name', $.path),
"=",
field('type', $.type)
),
type_atom: $ => choice(
$.just_type,
$.partial_type,
seq('(', $.type, ')'),
$.record_type,
),
_type_non_fn: $ => choice(
$.type_atom,
$.tagged_type,
$.union_type,
$.partial_union_type,
$.parametrized_type,
$.with_type,
$.recursive_type,
),
type: $ => choice(
$._type_non_fn,
$.fn_type,
),
union_type: $ => prec.left(1,
seq(
field('left', $.type),
'|',
field('right', $.type))),
partial_union_type: $ => prec.left(1,
seq(
field('left', $.type),
'|', '...',
field('partial', $.partial_type))),
tag: $ => new RustRegex("'(?:[a-zA-Z_][a-zA-Z0-9_]*(?:[.][a-zA-Z_0-9]+)*)"),
tagged_type: $ => prec.right(3,
seq(field('tag', $.tag), optional(
field('type', choice(
$.type_atom,
$.parametrized_type))))),
multi_type_parameters: $ => seq('[',
field('arg', $.type),
repeat(seq(',', field('arg', $.type))),
']'),
parametrized_type: $ => prec.left(4, seq(
field('nest', choice(
$.multi_type_parameters,
$.type_atom,
)),
repeat(field('nest', $.path)),
field('type', $.path),
)),
with_type: $ => seq('with',
field('arg', $.identifier),
repeat(seq(',', field('arg', $.identifier))),
':',
field('type', $.type)),
recursive_type: $ => seq('&',
field('name', $.identifier),
field('type', $.type)),
partial_type: $ => seq('?', $.identifier),
fn_type: $ => prec.left(-10,
seq(field('arg', $.type), '->', field('res', $.type))),
just_type: $ => prec(-1, $.path),
// TODO: doc comments
record_type_field: $ => seq(field('name', $.identifier), ':', field('type', $.type)),
record_type: $ => seq(
'{',
repeat(seq(field('field', $.record_type_field), ',')),
optional(choice(
field('field', $.record_type_field),
seq('...', field('partial', $.partial_type)),
)),
'}'),
escape_sequence: $ =>
token.immediate(
seq('\\', /[tbrnf0"'\\]/)),
char_middle: $ => /./,
string_middle: $ => /[^\\"]+/,
char_literal: $ =>
seq('\'', choice($.escape_sequence, $.char_middle), '\''),
// TODO: fstrings
string_literal: $ =>
seq('"', repeat(choice($.escape_sequence, $.string_middle)), '"'),
num_literal: $ =>
seq(
choice(
/[0-9]+/,
/\-[0-9]+/
),
optional(token.immediate(/[.][0-9]+/))
),
list_expression: $ =>
seq(
'[',
repeat(seq($.expression, ',')),
optional($.expression),
']'),
field_access: $ => prec.left(
seq(field('expr', $.atom), ':', field('field', $.identifier))),
function_call: $ => prec.left("function_call",
seq(
field('fn', $.atom),
'(',
repeat(seq(field('arg', $.expression), ',')),
optional(field('arg', $.expression)),
')')),
ident_expr: $ => prec("ident",
$.path),
record_expr_field: $ =>
seq(field('field', $.identifier), ':', field('value', $.expression)),
record_expr: $ => seq(
'{',
repeat(seq($.record_expr_field, ',')),
optional($.record_expr_field),
'}'),
atom: $ => choice(
prec("parentrized", seq('(', $.expression, ')')),
$.ident_expr,
$.char_literal,
$.string_literal,
$.num_literal,
$.list_expression,
$.field_access,
$.function_call,
$.record_expr,
),
let_binding: $ => prec("let", seq(
'let',
field('name', $.identifier),
'=',
field('value', $.expression),
optional('in'),
field('body', $.expression),
)),
await_binding: $ => prec("let", seq(
'await',
field('name', $.identifier),
'=',
field('value', $.expression),
optional('in'),
field('body', $.expression),
)),
type_downcast: $ => seq(
field('expr', $.atom),
'::',
field('as', $.type),
),
lambda: $ => prec.right(4, seq(
field('arg', $.identifier),
optional(seq(':', field('arg_type', $._type_non_fn))),
'->',
field('body', $.expression)
)),
if_expr: $ => prec("if",
seq(
'if',
field('condition', $.expression),
'then',
field('then', $.expression),
'else',
field('else', $.expression))),
_add_expr: $ => prec.left("addition",
seq(
field('left', $.expression),
choice('+', '-'),
field('right', $.expression)
)),
_multiply_expr: $ => prec.left("multiplication",
seq(
field('left', $.expression),
choice('*', '/'),
field('right', $.expression)
)),
_equal_expr: $ => prec.left("equal",
seq(
field('left', $.expression),
'=',
field('right', $.expression)
)),
_concat_expr: $ => prec.left("concat",
seq(
field('left', $.expression),
choice('++', '=>'),
field('right', $.expression)
)),
_exponent_expr: $ => prec.left("exponent",
seq(
field('left', $.expression),
'^',
field('right', $.atom)
)),
_bin_bool_expr: $ => prec.left("binary_boolean",
seq(
field('left', $.expression),
choice('and', 'or'),
field('right', $.atom)
)),
binary_expr: $ => choice(
$._exponent_expr,
$._concat_expr,
$._equal_expr,
$._multiply_expr,
$._add_expr,
$._bin_bool_expr,
),
match_arm: $ => prec("match_arm",
seq(
field('cases', seq($.atom, repeat(seq('|', $.atom)))),
'->', field('expr', $.atom))),
match_expr: $ =>
seq('match', field('on', $.expression), 'with',
field('arm', $.match_arm),
prec("new_match_arm", repeat(seq('|', field('arm', $.match_arm))))),
unary_expr: $ => prec.right("negate",
seq(
'-',
field('expr', $.expression))),
tag_expr: $ => prec.right("tag",
seq(
field('tag', $.tag),
field('expr', $.expression))),
await_expr: $ => prec.right("await",
seq('await', field('expr', $.expression))),
expression: $ => choice(
prec("expr_atom", $.atom),
$.let_binding,
$.await_binding,
$.await_expr,
$.type_downcast,
$.lambda,
$.if_expr,
$.tag_expr,
$.match_expr,
$.binary_expr,
$.unary_expr,
),
def: $ => seq(
'def',
field('name', $.path),
choice(
seq(':', field('signature', $.type)),
seq(
optional(seq(':', field('signature', $.type))),
seq('=', field('value', $.expression)),
)
),
),
}
});

View File

@@ -1,52 +0,0 @@
{
"name": "tree-sitter-crepuscular",
"version": "0.1.0",
"description": "Parser for the Crepuscular(ray) functional programming language",
"repository": "https://gitea.vxcc.dev/alexander.nutz/beginner-friendly-lang",
"license": "MIT",
"author": {
"name": "Alexander Nutz",
"email": "alexander.nutz@vxcc.dev",
"url": "https://alex.vxcc.dev/"
},
"main": "bindings/node",
"types": "bindings/node",
"keywords": [
"incremental",
"parsing",
"tree-sitter",
"crepuscular"
],
"files": [
"grammar.js",
"tree-sitter.json",
"binding.gyp",
"prebuilds/**",
"bindings/node/*",
"queries/*",
"src/**",
"*.wasm"
],
"dependencies": {
"node-addon-api": "^8.3.1",
"node-gyp-build": "^4.8.4"
},
"devDependencies": {
"prebuildify": "^6.0.1",
"tree-sitter-cli": "^0.25.8"
},
"peerDependencies": {
"tree-sitter": "^0.22.4"
},
"peerDependenciesMeta": {
"tree-sitter": {
"optional": true
}
},
"scripts": {
"install": "node-gyp-build",
"prestart": "tree-sitter build --wasm",
"start": "tree-sitter playground",
"test": "node --test bindings/node/*_test.js"
}
}

View File

@@ -1,29 +0,0 @@
[build-system]
requires = ["setuptools>=62.4.0", "wheel"]
build-backend = "setuptools.build_meta"
[project]
name = "tree-sitter-crepuscular"
description = "Parser for the Crepuscular(ray) functional programming language"
version = "0.1.0"
keywords = ["incremental", "parsing", "tree-sitter", "crepuscular"]
classifiers = [
"Intended Audience :: Developers",
"Topic :: Software Development :: Compilers",
"Topic :: Text Processing :: Linguistic",
"Typing :: Typed",
]
authors = [{ name = "Alexander Nutz", email = "alexander.nutz@vxcc.dev" }]
requires-python = ">=3.10"
license.text = "MIT"
readme = "README.md"
[project.urls]
Homepage = "https://gitea.vxcc.dev/alexander.nutz/beginner-friendly-lang"
[project.optional-dependencies]
core = ["tree-sitter~=0.24"]
[tool.cibuildwheel]
build = "cp310-*"
build-frontend = "build"

View File

@@ -1,58 +0,0 @@
; megic
(identifier) @variable
(tag) @label ; @constructor
[
"(" ")"
"{" "}"
"[" "]"
] @punctuation.bracket
[
"."
","
"|"
"->"
":"
] @punctuation.delimiter
[
(comment)
(doc_comment)
(section_comment)
] @comment
(num_literal) @constant.builtin
[(char_literal)
(string_literal)
] @string
(escape_sequence) @escape
["+" "-"
"*" "/"
"=>" "++"
"::"
] @operator
[
"def"
"type"
"with"
"extensible"
"extend"
"union"
"def"
"await"
"let"
"if"
"then"
"else"
"in"
"match"
"and"
"or"
] @keyword

View File

@@ -1,3 +0,0 @@
((doc_comment_value) @injection.content
(#set! injection.language "asciidoc")
(#set! injection.combined))

View File

@@ -1,159 +0,0 @@
; do not format inside these
[(string_literal)
(char_literal)
(comment)
(doc_comment)
] @leaf
; TODO: light asciidoc formatting
(definition) @allow_blank_line_before
(doc_comment_value) @append_hardline
(doc_comment) @prepend_hardline @append_hardline
(comment) @append_hardline
(let_binding
"let" @prepend_spaced_softline @append_space
"=" @prepend_space @append_space
value: (_) @prepend_indent_start @append_indent_end @prepend_begin_measuring_scope @append_end_measuring_scope
body: (_) @prepend_hardline
(#scope_id! "let"))
(await_binding
"await" @prepend_spaced_softline @append_space
"=" @prepend_space @append_space
value: (_) @prepend_indent_start @append_indent_end @prepend_begin_measuring_scope @append_end_measuring_scope
body: (_) @prepend_hardline
(#scope_id! "await"))
(def
":" @prepend_space @append_spaced_softline
signature: (_) @prepend_spaced_softline @prepend_indent_start @append_indent_end)
(def
"=" @prepend_space @append_spaced_softline
value: (_) @prepend_indent_start @prepend_begin_measuring_scope
@append_indent_end @append_end_measuring_scope
(#scope_id! "def"))
(def "def" @append_space)
(if_expr
then: (_) @prepend_begin_measuring_scope @append_end_measuring_scope
(#scope_id! "if_expr.then"))
(if_expr
else: (_) @prepend_begin_measuring_scope @append_end_measuring_scope
(#scope_id! "if_expr.else"))
(if_expr
"if" @prepend_space @append_space
"then" @prepend_space @append_spaced_softline
then: (_) @prepend_indent_start
@append_indent_end @append_hardline
"else" @prepend_space @append_spaced_softline
else: (_) @prepend_indent_start
@append_indent_end)
((match_arm) @append_hardline . (match_arm))
(match_arm
"->" @append_spaced_softline @prepend_indent_start @append_indent_end)
(match_arm
"|" @prepend_spaced_softline @append_space)
(match_expr
"|" @append_space)
(match_expr
"match" @append_space
"with" @prepend_space @append_hardline
. (match_arm) @prepend_indent_start @append_indent_end)
; add newline between two consecutive definitions
((definition) @append_hardline . (definition))
(list_expression
"," @append_spaced_softline)
(list_expression
"[" @append_begin_measuring_scope @append_empty_softline @append_indent_start
"]" @append_end_measuring_scope @prepend_empty_softline @append_indent_end
(#scope_id! "list_expr"))
; remove trailing comma
(list_expression
"," @delete . "]")
(extensible_union
"extensible" @append_space
"union" @append_space)
(extend_decl
"extend" @append_space
"with" @prepend_spaced_softline @append_space
tag: (_) @append_space)
(atom
"(" @delete
. (expression (atom))
. ")" @delete)
(full_partial_type_definition
"type" @append_space
"=" @prepend_space @append_spaced_softline)
(type_definition
"[" @prepend_space
"]" @append_space)
(type_definition
"," @append_space)
(type_definition
arg: (_) @append_space)
(type_definition
"type" @append_space
"=" @prepend_space @append_spaced_softline)
(function_call
"," @append_spaced_softline)
(record_expr_field
":" @append_spaced_softline)
(record_expr
"," @append_spaced_softline)
(record_expr
"{" @append_empty_softline
"}" @prepend_empty_softline)
; remove trailing comma
(record_expr
"," @delete . "}")
(atom
"(" @append_begin_measuring_scope @append_empty_softline @append_indent_start
")" @append_end_measuring_scope @prepend_empty_softline @append_indent_end
(#scope_id! "paren_expr"))
(await_expr
"await" @append_space)
(type_downcast
"::" @prepend_spaced_softline @append_space)
(lambda
":" @prepend_space @append_space)
(lambda
"->" @prepend_space @append_spaced_softline
body: (_) @prepend_indent_start @append_indent_end)
(type_atom
"(" @append_begin_measuring_scope @append_empty_softline @append_indent_start
")" @append_end_measuring_scope @prepend_empty_softline @append_indent_end
(#scope_id! "paren_type"))
(tag_expr
tag: (_) @append_space)
(binary_expr
left: (_) @append_spaced_softline
right: (_) @prepend_space)
; TODO: types
; TODO: disable-format-regions
; TODO: folding query
; TODO: wrap confusing expressions (in terms of precedence) in parens

View File

@@ -1,77 +0,0 @@
from os import path
from platform import system
from sysconfig import get_config_var
from setuptools import Extension, find_packages, setup
from setuptools.command.build import build
from setuptools.command.egg_info import egg_info
from wheel.bdist_wheel import bdist_wheel
sources = [
"bindings/python/tree_sitter_crepuscular/binding.c",
"src/parser.c",
]
if path.exists("src/scanner.c"):
sources.append("src/scanner.c")
macros: list[tuple[str, str | None]] = [
("PY_SSIZE_T_CLEAN", None),
("TREE_SITTER_HIDE_SYMBOLS", None),
]
if limited_api := not get_config_var("Py_GIL_DISABLED"):
macros.append(("Py_LIMITED_API", "0x030A0000"))
if system() != "Windows":
cflags = ["-std=c11", "-fvisibility=hidden"]
else:
cflags = ["/std:c11", "/utf-8"]
class Build(build):
def run(self):
if path.isdir("queries"):
dest = path.join(self.build_lib, "tree_sitter_crepuscular", "queries")
self.copy_tree("queries", dest)
super().run()
class BdistWheel(bdist_wheel):
def get_tag(self):
python, abi, platform = super().get_tag()
if python.startswith("cp"):
python, abi = "cp310", "abi3"
return python, abi, platform
class EggInfo(egg_info):
def find_sources(self):
super().find_sources()
self.filelist.recursive_include("queries", "*.scm")
self.filelist.include("src/tree_sitter/*.h")
setup(
packages=find_packages("bindings/python"),
package_dir={"": "bindings/python"},
package_data={
"tree_sitter_crepuscular": ["*.pyi", "py.typed"],
"tree_sitter_crepuscular.queries": ["*.scm"],
},
ext_package="tree_sitter_crepuscular",
ext_modules=[
Extension(
name="_binding",
sources=sources,
extra_compile_args=cflags,
define_macros=macros,
include_dirs=["src"],
py_limited_api=limited_api,
)
],
cmdclass={
"build": Build,
"bdist_wheel": BdistWheel,
"egg_info": EggInfo,
},
zip_safe=False
)

File diff suppressed because it is too large Load Diff

File diff suppressed because it is too large Load Diff

File diff suppressed because it is too large Load Diff

View File

@@ -1,54 +0,0 @@
#ifndef TREE_SITTER_ALLOC_H_
#define TREE_SITTER_ALLOC_H_
#ifdef __cplusplus
extern "C" {
#endif
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
// Allow clients to override allocation functions
#ifdef TREE_SITTER_REUSE_ALLOCATOR
extern void *(*ts_current_malloc)(size_t size);
extern void *(*ts_current_calloc)(size_t count, size_t size);
extern void *(*ts_current_realloc)(void *ptr, size_t size);
extern void (*ts_current_free)(void *ptr);
#ifndef ts_malloc
#define ts_malloc ts_current_malloc
#endif
#ifndef ts_calloc
#define ts_calloc ts_current_calloc
#endif
#ifndef ts_realloc
#define ts_realloc ts_current_realloc
#endif
#ifndef ts_free
#define ts_free ts_current_free
#endif
#else
#ifndef ts_malloc
#define ts_malloc malloc
#endif
#ifndef ts_calloc
#define ts_calloc calloc
#endif
#ifndef ts_realloc
#define ts_realloc realloc
#endif
#ifndef ts_free
#define ts_free free
#endif
#endif
#ifdef __cplusplus
}
#endif
#endif // TREE_SITTER_ALLOC_H_

View File

@@ -1,291 +0,0 @@
#ifndef TREE_SITTER_ARRAY_H_
#define TREE_SITTER_ARRAY_H_
#ifdef __cplusplus
extern "C" {
#endif
#include "./alloc.h"
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#ifdef _MSC_VER
#pragma warning(push)
#pragma warning(disable : 4101)
#elif defined(__GNUC__) || defined(__clang__)
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wunused-variable"
#endif
#define Array(T) \
struct { \
T *contents; \
uint32_t size; \
uint32_t capacity; \
}
/// Initialize an array.
#define array_init(self) \
((self)->size = 0, (self)->capacity = 0, (self)->contents = NULL)
/// Create an empty array.
#define array_new() \
{ NULL, 0, 0 }
/// Get a pointer to the element at a given `index` in the array.
#define array_get(self, _index) \
(assert((uint32_t)(_index) < (self)->size), &(self)->contents[_index])
/// Get a pointer to the first element in the array.
#define array_front(self) array_get(self, 0)
/// Get a pointer to the last element in the array.
#define array_back(self) array_get(self, (self)->size - 1)
/// Clear the array, setting its size to zero. Note that this does not free any
/// memory allocated for the array's contents.
#define array_clear(self) ((self)->size = 0)
/// Reserve `new_capacity` elements of space in the array. If `new_capacity` is
/// less than the array's current capacity, this function has no effect.
#define array_reserve(self, new_capacity) \
_array__reserve((Array *)(self), array_elem_size(self), new_capacity)
/// Free any memory allocated for this array. Note that this does not free any
/// memory allocated for the array's contents.
#define array_delete(self) _array__delete((Array *)(self))
/// Push a new `element` onto the end of the array.
#define array_push(self, element) \
(_array__grow((Array *)(self), 1, array_elem_size(self)), \
(self)->contents[(self)->size++] = (element))
/// Increase the array's size by `count` elements.
/// New elements are zero-initialized.
#define array_grow_by(self, count) \
do { \
if ((count) == 0) break; \
_array__grow((Array *)(self), count, array_elem_size(self)); \
memset((self)->contents + (self)->size, 0, (count) * array_elem_size(self)); \
(self)->size += (count); \
} while (0)
/// Append all elements from one array to the end of another.
#define array_push_all(self, other) \
array_extend((self), (other)->size, (other)->contents)
/// Append `count` elements to the end of the array, reading their values from the
/// `contents` pointer.
#define array_extend(self, count, contents) \
_array__splice( \
(Array *)(self), array_elem_size(self), (self)->size, \
0, count, contents \
)
/// Remove `old_count` elements from the array starting at the given `index`. At
/// the same index, insert `new_count` new elements, reading their values from the
/// `new_contents` pointer.
#define array_splice(self, _index, old_count, new_count, new_contents) \
_array__splice( \
(Array *)(self), array_elem_size(self), _index, \
old_count, new_count, new_contents \
)
/// Insert one `element` into the array at the given `index`.
#define array_insert(self, _index, element) \
_array__splice((Array *)(self), array_elem_size(self), _index, 0, 1, &(element))
/// Remove one element from the array at the given `index`.
#define array_erase(self, _index) \
_array__erase((Array *)(self), array_elem_size(self), _index)
/// Pop the last element off the array, returning the element by value.
#define array_pop(self) ((self)->contents[--(self)->size])
/// Assign the contents of one array to another, reallocating if necessary.
#define array_assign(self, other) \
_array__assign((Array *)(self), (const Array *)(other), array_elem_size(self))
/// Swap one array with another
#define array_swap(self, other) \
_array__swap((Array *)(self), (Array *)(other))
/// Get the size of the array contents
#define array_elem_size(self) (sizeof *(self)->contents)
/// Search a sorted array for a given `needle` value, using the given `compare`
/// callback to determine the order.
///
/// If an existing element is found to be equal to `needle`, then the `index`
/// out-parameter is set to the existing value's index, and the `exists`
/// out-parameter is set to true. Otherwise, `index` is set to an index where
/// `needle` should be inserted in order to preserve the sorting, and `exists`
/// is set to false.
#define array_search_sorted_with(self, compare, needle, _index, _exists) \
_array__search_sorted(self, 0, compare, , needle, _index, _exists)
/// Search a sorted array for a given `needle` value, using integer comparisons
/// of a given struct field (specified with a leading dot) to determine the order.
///
/// See also `array_search_sorted_with`.
#define array_search_sorted_by(self, field, needle, _index, _exists) \
_array__search_sorted(self, 0, _compare_int, field, needle, _index, _exists)
/// Insert a given `value` into a sorted array, using the given `compare`
/// callback to determine the order.
#define array_insert_sorted_with(self, compare, value) \
do { \
unsigned _index, _exists; \
array_search_sorted_with(self, compare, &(value), &_index, &_exists); \
if (!_exists) array_insert(self, _index, value); \
} while (0)
/// Insert a given `value` into a sorted array, using integer comparisons of
/// a given struct field (specified with a leading dot) to determine the order.
///
/// See also `array_search_sorted_by`.
#define array_insert_sorted_by(self, field, value) \
do { \
unsigned _index, _exists; \
array_search_sorted_by(self, field, (value) field, &_index, &_exists); \
if (!_exists) array_insert(self, _index, value); \
} while (0)
// Private
typedef Array(void) Array;
/// This is not what you're looking for, see `array_delete`.
static inline void _array__delete(Array *self) {
if (self->contents) {
ts_free(self->contents);
self->contents = NULL;
self->size = 0;
self->capacity = 0;
}
}
/// This is not what you're looking for, see `array_erase`.
static inline void _array__erase(Array *self, size_t element_size,
uint32_t index) {
assert(index < self->size);
char *contents = (char *)self->contents;
memmove(contents + index * element_size, contents + (index + 1) * element_size,
(self->size - index - 1) * element_size);
self->size--;
}
/// This is not what you're looking for, see `array_reserve`.
static inline void _array__reserve(Array *self, size_t element_size, uint32_t new_capacity) {
if (new_capacity > self->capacity) {
if (self->contents) {
self->contents = ts_realloc(self->contents, new_capacity * element_size);
} else {
self->contents = ts_malloc(new_capacity * element_size);
}
self->capacity = new_capacity;
}
}
/// This is not what you're looking for, see `array_assign`.
static inline void _array__assign(Array *self, const Array *other, size_t element_size) {
_array__reserve(self, element_size, other->size);
self->size = other->size;
memcpy(self->contents, other->contents, self->size * element_size);
}
/// This is not what you're looking for, see `array_swap`.
static inline void _array__swap(Array *self, Array *other) {
Array swap = *other;
*other = *self;
*self = swap;
}
/// This is not what you're looking for, see `array_push` or `array_grow_by`.
static inline void _array__grow(Array *self, uint32_t count, size_t element_size) {
uint32_t new_size = self->size + count;
if (new_size > self->capacity) {
uint32_t new_capacity = self->capacity * 2;
if (new_capacity < 8) new_capacity = 8;
if (new_capacity < new_size) new_capacity = new_size;
_array__reserve(self, element_size, new_capacity);
}
}
/// This is not what you're looking for, see `array_splice`.
static inline void _array__splice(Array *self, size_t element_size,
uint32_t index, uint32_t old_count,
uint32_t new_count, const void *elements) {
uint32_t new_size = self->size + new_count - old_count;
uint32_t old_end = index + old_count;
uint32_t new_end = index + new_count;
assert(old_end <= self->size);
_array__reserve(self, element_size, new_size);
char *contents = (char *)self->contents;
if (self->size > old_end) {
memmove(
contents + new_end * element_size,
contents + old_end * element_size,
(self->size - old_end) * element_size
);
}
if (new_count > 0) {
if (elements) {
memcpy(
(contents + index * element_size),
elements,
new_count * element_size
);
} else {
memset(
(contents + index * element_size),
0,
new_count * element_size
);
}
}
self->size += new_count - old_count;
}
/// A binary search routine, based on Rust's `std::slice::binary_search_by`.
/// This is not what you're looking for, see `array_search_sorted_with` or `array_search_sorted_by`.
#define _array__search_sorted(self, start, compare, suffix, needle, _index, _exists) \
do { \
*(_index) = start; \
*(_exists) = false; \
uint32_t size = (self)->size - *(_index); \
if (size == 0) break; \
int comparison; \
while (size > 1) { \
uint32_t half_size = size / 2; \
uint32_t mid_index = *(_index) + half_size; \
comparison = compare(&((self)->contents[mid_index] suffix), (needle)); \
if (comparison <= 0) *(_index) = mid_index; \
size -= half_size; \
} \
comparison = compare(&((self)->contents[*(_index)] suffix), (needle)); \
if (comparison == 0) *(_exists) = true; \
else if (comparison < 0) *(_index) += 1; \
} while (0)
/// Helper macro for the `_sorted_by` routines below. This takes the left (existing)
/// parameter by reference in order to work with the generic sorting function above.
#define _compare_int(a, b) ((int)*(a) - (int)(b))
#ifdef _MSC_VER
#pragma warning(pop)
#elif defined(__GNUC__) || defined(__clang__)
#pragma GCC diagnostic pop
#endif
#ifdef __cplusplus
}
#endif
#endif // TREE_SITTER_ARRAY_H_

View File

@@ -1,286 +0,0 @@
#ifndef TREE_SITTER_PARSER_H_
#define TREE_SITTER_PARSER_H_
#ifdef __cplusplus
extern "C" {
#endif
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>
#define ts_builtin_sym_error ((TSSymbol)-1)
#define ts_builtin_sym_end 0
#define TREE_SITTER_SERIALIZATION_BUFFER_SIZE 1024
#ifndef TREE_SITTER_API_H_
typedef uint16_t TSStateId;
typedef uint16_t TSSymbol;
typedef uint16_t TSFieldId;
typedef struct TSLanguage TSLanguage;
typedef struct TSLanguageMetadata {
uint8_t major_version;
uint8_t minor_version;
uint8_t patch_version;
} TSLanguageMetadata;
#endif
typedef struct {
TSFieldId field_id;
uint8_t child_index;
bool inherited;
} TSFieldMapEntry;
// Used to index the field and supertype maps.
typedef struct {
uint16_t index;
uint16_t length;
} TSMapSlice;
typedef struct {
bool visible;
bool named;
bool supertype;
} TSSymbolMetadata;
typedef struct TSLexer TSLexer;
struct TSLexer {
int32_t lookahead;
TSSymbol result_symbol;
void (*advance)(TSLexer *, bool);
void (*mark_end)(TSLexer *);
uint32_t (*get_column)(TSLexer *);
bool (*is_at_included_range_start)(const TSLexer *);
bool (*eof)(const TSLexer *);
void (*log)(const TSLexer *, const char *, ...);
};
typedef enum {
TSParseActionTypeShift,
TSParseActionTypeReduce,
TSParseActionTypeAccept,
TSParseActionTypeRecover,
} TSParseActionType;
typedef union {
struct {
uint8_t type;
TSStateId state;
bool extra;
bool repetition;
} shift;
struct {
uint8_t type;
uint8_t child_count;
TSSymbol symbol;
int16_t dynamic_precedence;
uint16_t production_id;
} reduce;
uint8_t type;
} TSParseAction;
typedef struct {
uint16_t lex_state;
uint16_t external_lex_state;
} TSLexMode;
typedef struct {
uint16_t lex_state;
uint16_t external_lex_state;
uint16_t reserved_word_set_id;
} TSLexerMode;
typedef union {
TSParseAction action;
struct {
uint8_t count;
bool reusable;
} entry;
} TSParseActionEntry;
typedef struct {
int32_t start;
int32_t end;
} TSCharacterRange;
struct TSLanguage {
uint32_t abi_version;
uint32_t symbol_count;
uint32_t alias_count;
uint32_t token_count;
uint32_t external_token_count;
uint32_t state_count;
uint32_t large_state_count;
uint32_t production_id_count;
uint32_t field_count;
uint16_t max_alias_sequence_length;
const uint16_t *parse_table;
const uint16_t *small_parse_table;
const uint32_t *small_parse_table_map;
const TSParseActionEntry *parse_actions;
const char * const *symbol_names;
const char * const *field_names;
const TSMapSlice *field_map_slices;
const TSFieldMapEntry *field_map_entries;
const TSSymbolMetadata *symbol_metadata;
const TSSymbol *public_symbol_map;
const uint16_t *alias_map;
const TSSymbol *alias_sequences;
const TSLexerMode *lex_modes;
bool (*lex_fn)(TSLexer *, TSStateId);
bool (*keyword_lex_fn)(TSLexer *, TSStateId);
TSSymbol keyword_capture_token;
struct {
const bool *states;
const TSSymbol *symbol_map;
void *(*create)(void);
void (*destroy)(void *);
bool (*scan)(void *, TSLexer *, const bool *symbol_whitelist);
unsigned (*serialize)(void *, char *);
void (*deserialize)(void *, const char *, unsigned);
} external_scanner;
const TSStateId *primary_state_ids;
const char *name;
const TSSymbol *reserved_words;
uint16_t max_reserved_word_set_size;
uint32_t supertype_count;
const TSSymbol *supertype_symbols;
const TSMapSlice *supertype_map_slices;
const TSSymbol *supertype_map_entries;
TSLanguageMetadata metadata;
};
static inline bool set_contains(const TSCharacterRange *ranges, uint32_t len, int32_t lookahead) {
uint32_t index = 0;
uint32_t size = len - index;
while (size > 1) {
uint32_t half_size = size / 2;
uint32_t mid_index = index + half_size;
const TSCharacterRange *range = &ranges[mid_index];
if (lookahead >= range->start && lookahead <= range->end) {
return true;
} else if (lookahead > range->end) {
index = mid_index;
}
size -= half_size;
}
const TSCharacterRange *range = &ranges[index];
return (lookahead >= range->start && lookahead <= range->end);
}
/*
* Lexer Macros
*/
#ifdef _MSC_VER
#define UNUSED __pragma(warning(suppress : 4101))
#else
#define UNUSED __attribute__((unused))
#endif
#define START_LEXER() \
bool result = false; \
bool skip = false; \
UNUSED \
bool eof = false; \
int32_t lookahead; \
goto start; \
next_state: \
lexer->advance(lexer, skip); \
start: \
skip = false; \
lookahead = lexer->lookahead;
#define ADVANCE(state_value) \
{ \
state = state_value; \
goto next_state; \
}
#define ADVANCE_MAP(...) \
{ \
static const uint16_t map[] = { __VA_ARGS__ }; \
for (uint32_t i = 0; i < sizeof(map) / sizeof(map[0]); i += 2) { \
if (map[i] == lookahead) { \
state = map[i + 1]; \
goto next_state; \
} \
} \
}
#define SKIP(state_value) \
{ \
skip = true; \
state = state_value; \
goto next_state; \
}
#define ACCEPT_TOKEN(symbol_value) \
result = true; \
lexer->result_symbol = symbol_value; \
lexer->mark_end(lexer);
#define END_STATE() return result;
/*
* Parse Table Macros
*/
#define SMALL_STATE(id) ((id) - LARGE_STATE_COUNT)
#define STATE(id) id
#define ACTIONS(id) id
#define SHIFT(state_value) \
{{ \
.shift = { \
.type = TSParseActionTypeShift, \
.state = (state_value) \
} \
}}
#define SHIFT_REPEAT(state_value) \
{{ \
.shift = { \
.type = TSParseActionTypeShift, \
.state = (state_value), \
.repetition = true \
} \
}}
#define SHIFT_EXTRA() \
{{ \
.shift = { \
.type = TSParseActionTypeShift, \
.extra = true \
} \
}}
#define REDUCE(symbol_name, children, precedence, prod_id) \
{{ \
.reduce = { \
.type = TSParseActionTypeReduce, \
.symbol = symbol_name, \
.child_count = children, \
.dynamic_precedence = precedence, \
.production_id = prod_id \
}, \
}}
#define RECOVER() \
{{ \
.type = TSParseActionTypeRecover \
}}
#define ACCEPT_INPUT() \
{{ \
.type = TSParseActionTypeAccept \
}}
#ifdef __cplusplus
}
#endif
#endif // TREE_SITTER_PARSER_H_

View File

@@ -1,9 +0,0 @@
{
languages = {
crepuscular = {
extensions = ["crr"],
indent = " ",
grammar.source.path = "./libtree-sitter-crepuscular.so",
}
}
}

View File

@@ -1,41 +0,0 @@
{
"$schema": "https://tree-sitter.github.io/tree-sitter/assets/schemas/config.schema.json",
"grammars": [
{
"name": "crepuscular",
"camelcase": "Crepuscular",
"title": "Crepuscular",
"scope": "source.crepuscular",
"file-types": [
"crr"
],
"injection-regex": "^crepuscular$",
"class-name": "TreeSitterCrepuscular",
"highlights": "queries/highlights.scm"
}
],
"metadata": {
"version": "0.1.0",
"license": "MIT",
"description": "Parser for the Crepuscular(ray) functional programming language",
"authors": [
{
"name": "Alexander Nutz",
"email": "alexander.nutz@vxcc.dev",
"url": "https://alex.vxcc.dev/"
}
],
"links": {
"repository": "https://gitea.vxcc.dev/alexander.nutz/beginner-friendly-lang"
}
},
"bindings": {
"c": true,
"go": true,
"node": true,
"python": true,
"rust": true,
"swift": true,
"zig": false
}
}