srdatalog.ir.codegen.cuda.build.cache¶
JIT cache directory management + batch file writer.
Port of src/srdatalog/codegen/target_jit/jit_file.nim:
getJitCacheDir/ensureJitCacheDir, plus JitBatchManager with addKernel/writeBatchFiles/writeSchemaHeader/writeKernelDeclHeader.
Python uses ~/.cache/srdatalog/jit/<project>_<hash>/ (vs Nim’s
~/.cache/nim/jit/...) so the two toolchains don’t clobber each
other’s caches. Callers that need Nim-compatible output can pass an
explicit cache dir via the cache_dir arg.
The one-shot entry point write_jit_project() glues everything
together — given the string outputs from main_file.py + per-rule
complete runner emissions, it lays out the full .cpp tree on disk.
Set SRDATALOG_SKIP_JIT_REGEN=1 to reuse existing files (debugging
mode — matches Nim’s behavior).
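The regen-skip guard can be sketched as follows. This is an illustrative reconstruction of the behavior described above, not the library's actual code; the helper name is hypothetical.

```python
import os

def should_skip_regen(cached_files_exist: bool) -> bool:
    # Hypothetical sketch: with SRDATALOG_SKIP_JIT_REGEN=1 set and the
    # .cpp tree already on disk, reuse the existing files instead of
    # regenerating them (debugging mode).
    skip_requested = os.environ.get("SRDATALOG_SKIP_JIT_REGEN") == "1"
    return skip_requested and cached_files_exist
```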
Module Contents¶
Classes¶
JitBatchManager: Shards per-rule runner code across fixed-size batch files, then writes them + the schema/kernel headers to the cache dir.
Functions¶
ensure_jit_cache_dir: Create the cache dir if needed; return the path.
get_jit_cache_dir: Return the cache dir path for a project.
write_jit_project: Lay out the full .cpp tree for a project.
Data¶
API¶
- srdatalog.ir.codegen.cuda.build.cache.JIT_COMMON_INCLUDES = <Multiline-String>¶
- srdatalog.ir.codegen.cuda.build.cache.JIT_FILE_FOOTER = <Multiline-String>¶
- class srdatalog.ir.codegen.cuda.build.cache.JitBatchManager(project_name: str, rules_per_batch: int = _DEFAULT_RULES_PER_BATCH, cache_base: str | None = None)[source]¶
Shards per-rule runner code across fixed-size batch files, then writes them + the schema/kernel headers to the cache dir.
Mirrors Nim’s JitBatchManager in jit_file.nim:100-270.
Initialization
- add_kernel(kernel_code: str, rule_name: str | None = None) None[source]¶
Add one JitRunner_<rule> struct (complete with global kernels) to the next batch slot.
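The sharding scheme behind add_kernel can be sketched with a toy re-implementation. This is illustrative only (names and the helper itself are not the library's API); it shows how rules fill fixed-size batch slots up to the MAX_BATCH_FILES budget.

```python
def shard_rules(rule_names, rules_per_batch=8, max_batch_files=16):
    """Toy sketch: group rule names into fixed-size batches, one per
    jit_batch_N.cpp file, capped at max_batch_files batches."""
    batches = [
        rule_names[i : i + rules_per_batch]
        for i in range(0, len(rule_names), rules_per_batch)
    ]
    if len(batches) > max_batch_files:
        raise ValueError("too many rules for the fixed batch-file budget")
    # Batch N would be written out as jit_batch_N.cpp.
    return {f"jit_batch_{n}.cpp": batch for n, batch in enumerate(batches)}
```

With rules_per_batch=4, ten rules land in three batch files, the last holding the two leftover rules.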
- class srdatalog.ir.codegen.cuda.build.cache.JitProjectLayout[source]¶
Bases:
typing.TypedDict
- srdatalog.ir.codegen.cuda.build.cache.MAX_BATCH_FILES¶
16
- srdatalog.ir.codegen.cuda.build.cache.ensure_jit_cache_dir(project_name: str, base: str | None = None) str[source]¶
Create the cache dir if needed; return the path.
- srdatalog.ir.codegen.cuda.build.cache.get_jit_cache_dir(project_name: str, base: str | None = None) str[source]¶
~/.cache/srdatalog/jit/<project>_<hash4>/. The base arg overrides ~/.cache/srdatalog — e.g. tests pass a tmpdir.
- srdatalog.ir.codegen.cuda.build.cache.write_jit_project(project_name: str, main_file_content: str, per_rule_runners: list[tuple[str, str]], *, schema_definitions: str = '', db_type_alias: str = '', extra_headers: list[str] | None = None, cache_base: str | None = None, main_file_name: str = 'main.cpp') srdatalog.ir.codegen.cuda.build.cache.JitProjectLayout[source]¶
Lay out the full .cpp tree for a project.
Args:
project_name: cache dir name (e.g. “TrianglePlan_DB”).
main_file_content: output of main_file.gen_main_file_content.
per_rule_runners: list of (rule_name, full_runner_cpp) tuples — typically the full returned by complete_runner.gen_complete_runner for each non-materialized ExecutePipeline. Gets sharded across jit_batch_N.cpp files.
schema_definitions: optional project schema header content.
db_type_alias: optional DB type alias string (inlined into each batch file for template derivation).
extra_headers: per-rule plugin headers (e.g. “gpu/device_2level_index.h”) #include’d into every batch file.
cache_base: override for ~/.cache/srdatalog (tests pass a tmpdir).
main_file_name: output name for the top-level main file.
Returns:
dict with keys dir, main, batches (list[str]), schema_header, kernel_header (possibly “”) — every path absolute.
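The on-disk result can be sketched by a toy function that produces the same JitProjectLayout shape. This is a reconstruction from the Returns description above, not the library code; the schema header file name and the file contents are placeholders.

```python
from pathlib import Path

def sketch_layout(project_dir, main_src, batch_srcs, schema_src=""):
    """Toy reconstruction of write_jit_project's output layout:
    main.cpp + jit_batch_N.cpp files under the cache dir, returned as a
    dict with dir/main/batches/schema_header/kernel_header keys."""
    root = Path(project_dir)
    root.mkdir(parents=True, exist_ok=True)
    main = root / "main.cpp"
    main.write_text(main_src)
    batches = []
    for n, src in enumerate(batch_srcs):
        p = root / f"jit_batch_{n}.cpp"  # batch file naming per the doc
        p.write_text(src)
        batches.append(str(p))
    schema_header = ""
    if schema_src:
        sh = root / "project_schema.h"  # header file name is a guess
        sh.write_text(schema_src)
        schema_header = str(sh)
    return {"dir": str(root), "main": str(main), "batches": batches,
            "schema_header": schema_header, "kernel_header": ""}
```

As documented, keys whose files were not emitted (here kernel_header, and schema_header when no schema is given) come back as the empty string rather than being omitted.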