Welcome to my blog. The following topics are covered here:

GitHub Spec Kit in 2026: SDD Goes Mainstream 🚀

Six months ago, we explored how GitHub Spec Kit was beginning to reshape software development. In early 2026, that promise isn’t just materializing — it’s accelerating. The project has hit version 0.5.0, the ecosystem has exploded, and Spec-Driven Development has transitioned from “interesting idea” to actual industry standard. Here’s what’s changed, and why you should care.

The Big Shift: From Framework to Platform

GitHub Spec Kit is no longer just a lightweight documentation toolkit. As of April 2026, it’s evolved into a full extensibility platform that works across the entire AI-assisted development landscape. ...

April 4, 2026 · 5 min · James M

Mac Homebrew packages

Essential
- bat - Cat alternative with syntax highlighting and Git integration
- fzf - Fuzzy finder for CLI (command history, file search, etc.)
- glow - Markdown reader in the terminal
- htop - Interactive process monitor with colors and mouse support
- jq - JSON query and manipulation tool (sed for JSON)
- pyenv - Python version manager
- python - Python (3.11+)
- ripgrep (rg) - Fast, recursive grep alternative
- terraform - Infrastructure as code provisioning
- tfswitch - Switch Terraform versions easily (warrensbox/tap/tfswitch)
- tree - Display directory structure visually
- wget - Command-line file downloader
- yq - YAML/JSON/XML processor and querying tool

Cloud & Container Tools
- awscli - AWS Command Line Interface
- docker - Container platform and runtime
- gcloud - Google Cloud CLI
- helm - Kubernetes package manager
- k9s - Interactive Kubernetes resource viewer and manager
- kubectl - Kubernetes command-line tool
- kubectx - Switch between Kubernetes clusters and namespaces
- minikube - Run Kubernetes locally in a VM

Development Languages & Frameworks
- django - Python web framework
- go - Go programming language
- nvm - Node.js version manager
- npm - Node Package Manager
- pytorch - Machine learning framework for deep learning
- rbenv - Ruby version manager
- rust - Rust programming language
- tensorflow - ML library for machine learning and AI

DevOps & Infrastructure Tools
- ansible - Configuration management and automation
- consul - Service mesh and service discovery
- hashicorp/tap/vault - Secrets management tool
- packer - Machine image builder
- prometheus - Metrics collection and monitoring

System & Network Tools
- bottom - System monitor (process, memory, disk, network)
- dust - Disk usage analyzer (better than du)
- exa - Modern ls replacement with colors and icons
- fd - Fast find alternative
- lnav - Log file analyzer and explorer
- mtr - Network diagnostic combining ping and traceroute
- speedtest-cli - Test internet upload/download speed
- tldr - Simplified man pages with practical examples

File & Directory Tools
- fzf - Fuzzy finder for interactive searching
- midnight-commander - Full-screen file manager (mc)
- ncdu - Disk space usage analyzer
- ranger - Terminal file manager with preview support

Productivity & Utilities
- direnv - Load environment variables based on directory
- glow - Markdown reader for the terminal
- httpie - HTTP CLI client (curl alternative)
- jupyter - Interactive notebooks for data science
- navi - Interactive cheatsheet and command browser
- task - Task management and todo app
- tmux - Terminal multiplexer (multiple sessions/panes)

Database & Data Tools
- postgresql - PostgreSQL database client
- redis-cli - Redis key-value store client
- sqlite - Lightweight embedded database

Additional Utilities
- neofetch - System information display
- snappy - Compression library for fast compression/decompression
- youtube-dl - Download videos from YouTube and other sites

Related Pages
Mac Applications & Utilities

April 4, 2026 · 3 min · James M

Mac Applications & Utilities

Productivity & Writing
- Microsoft 365 💰 — Suite of office applications
  - Excel — Spreadsheet application
  - OneNote — Digital note-taking
  - Outlook — Email & calendar management
  - PowerPoint — Presentation software
  - Word — Document writing
- Notion 💰 — All-in-one workspace for notes, databases, and project management
- Obsidian 💰 — Private markdown-based writing and knowledge management app
- MindNode 💰 — Mind mapping and brainstorming tool

Development & Version Control
- PyCharm 💰 — Comprehensive Python IDE with debugging and testing tools
- GitKraken 💰 — Powerful visual Git client with integrated workflows
- iTerm2 🆓 — Advanced terminal emulator with split panes and extensive customization
- FileZilla 🆓 — FTP, FTPS & SFTP client for file transfer
- Kaleidoscope 💰 — Visual diff tool for comparing text, images, and folders
- VisualDiffer 💰 — Advanced folder & file comparison utility

File & System Management
- Path Finder 💰 — Advanced file manager with extended functionality
- The Unarchiver 🆓 — Open any archive format (ZIP, RAR, 7z, etc.)
- CleanMyMac 💰 — System cleanup and optimization utility
- Disk Space Analyzer Pro 💰 — Visualize and reclaim disk space
- DiskCatalogMaker 💰 — Create and manage disc catalogs
- DirEqual 💰 — Compare and sync folder contents
- CloudMounter 💰 — Mount cloud storage (Dropbox, Google Drive, OneDrive, S3) as local drives
- Google Drive 🆓 — Cloud storage and file sync
- OneDrive 🆓 — Microsoft cloud storage integration

Text & Data Tools
- TextSoap 💰 — Batch text transformations and cleanup
- Text Workflow 💰 — Automation engine for text manipulation
- Easy Data Transform 💰 — Data merging, splitting, cleaning without coding
- Pure Paste 💰 — Clipboard manager that pastes as plain text by default

Screenshot & Media
- CleanShot 💰 — Professional screenshot and screen recording tool
- Loom 🆓 — Screen recording and video messaging
- VLC 🆓 — Universal media player supporting all formats
- OmniPlayer Pro 💰 — Advanced audio and video player

Communication & Collaboration
- Slack 💰 — Team messaging and collaboration platform
- Teams 💰 — Microsoft unified communications platform
- Otter 💰 — AI-powered voice transcription

System Utilities
- Alfred 💰 — Productivity launcher with hotkeys, snippets, and workflows
- Amphetamine 💰 — Keep-awake utility to prevent sleep
- LastPass 💰 — Password manager with secure autofill
- NordVPN 💰 — VPN for privacy and security

Virtualization & System Extension
- Parallels 💰 — Run Windows, Linux, or other OS alongside macOS

iPhone Apps
- Just Press Record 💰 — Recording and transcription with iCloud sync
- Otter 💰 — Voice-to-text transcription

Legend: 🆓 = Free | 💰 = Paid/Freemium ...

April 4, 2026 · 2 min · James M

List of Data Engineering & Data Science Courses

Data Engineering

Professional Certificates (Industry-Backed)
- IBM Data Engineering Professional Certificate (Coursera)
- DeepLearning.AI Data Engineering Professional Certificate
- MIT xPRO Professional Certificate in Data Engineering - 6 months, $7,900

A Cloud Guru
- Apache Kafka Deep Dive
- AWS Certified Big Data Specialty
- Google Certified Professional Data Engineer
- Microsoft Certified: Azure Data Engineer Associate (DP-700) - Updated for Microsoft Fabric

Coursera
- Introduction to Data Engineering
- Master Real-Time Streaming with Kafka & Spark - Updated Jan 2026
- Data Science with Databricks for Data Analysts Specialization

DataCamp
- Building Data Engineering Pipelines in Python
- Database Design
- ETL in Python
- Introduction to Airflow in Python
- Introduction to Data Engineering
- NoSQL Concepts
- Streaming Concepts
- Understanding Data Engineering

Google
- Building Batch Data Pipelines on Google Cloud
- Building Resilient Streaming Analytics Systems on Google Cloud
- Modernizing Data Lakes and Data Warehouses with Google Cloud
- Preparing for the Google Cloud Professional Data Engineer Exam
- Serverless Data Processing with Dataflow: Develop Pipelines
- Serverless Data Processing with Dataflow: Foundations
- Serverless Data Processing with Dataflow: Operations

Udacity
- Data Engineer Nanodegree
- Data Streaming Nanodegree

Udemy
- Taming Big Data with Apache Spark and Python - Hands On!
- Data Engineering using Kafka and Spark Structured Streaming

TutorialsPoint
- Apache Spark Certification - Big Data, Hadoop, Kafka, ML with Spark

Whizlabs
- Apache Kafka Fundamentals
- Databricks Certified Associate Developer for Apache Spark (Python)
- Databricks Certified Data Analyst Associate Certification
- Databricks Certified Data Engineer Associate Certification
- Databricks Certified Data Engineer Professional Certification
- Snowflake SnowPro Core Certification

Simplilearn
- Post Graduate Program in Data Engineering

Class Central
- 1700+ Data Engineering Courses
- 700+ Apache Kafka Courses

Data Science

Professional Certificates (Industry-Backed)
- IBM Data Science Professional Certificate (Coursera)
- Google Advanced Data Analytics Professional Certificate

A Cloud Guru
- Introduction to Machine Learning

Coursera
- Data Science with Databricks for Data Analysts Specialization

DataCamp
- Introduction to Data Science in Python
- Python Data Science Toolbox (Part 1)

Google
- Data Science Foundations
- Data Science with Python
- Google Cloud Big Data and Machine Learning Fundamentals
- Intro to TensorFlow for Deep Learning
- Learn Python basics for data analysis
- Machine Learning Crash Course
- Smart Analytics, Machine Learning, and AI on Google Cloud

Udemy
- AWS Certified Machine Learning Specialty 2023 - Hands On!

Whizlabs
- AWS Certified Machine Learning Specialty
- Databricks Certified Machine Learning Associate Certification
- Databricks Certified Machine Learning Professional Certification
- Introduction to Data Science with Python
- TensorFlow for Deep Learning with Python

Additional Learning Resources

Aggregator Platforms
- Class Central - Data Science Courses - Discover free & paid courses
- BitDegree - Best Data Science Courses - 2026 updated rankings

For Cloud Certifications
When choosing between cloud platforms: ...

April 4, 2026 · 3 min · James M

Databricks Training & Certification

Databricks offers several certification tracks for data engineers, data analysts, ML engineers, and generative AI engineers at both associate and professional levels. Choose based on your role and experience level. All certifications are valid for 2 years and cost $200 per exam attempt.

Official Databricks Resources

Start here for authoritative training materials and exam information:
- Databricks Training & Certification - Official certification hub
- Databricks Certification - Certification details and exam scheduling
- Databricks Learning Library - Full course catalog
- Databricks Learn - Free learning resources and documentation

Free Official Courses
- Lakehouse Platform Fundamentals - Free foundational course with accreditation (4 video tutorials + knowledge test)
- Databricks Fundamentals - Core platform concepts

Certification Tracks

Data Engineer

Build and optimize data pipelines on the Lakehouse platform. Recent updates (July 2025) now emphasize DLT, Unity Catalog, Delta Sharing, Lakehouse Federation, and Auto Loader. ...

April 4, 2026 · 2 min · James M

Databricks CheatSheet

Databricks Notebook Commands

%config - Set configuration options for the notebook
%env - Set environment variables
%fs - Interact with the Databricks file system (e.g. %fs ls dbfs:/repo)
%load - Load the contents of a file into a cell
%lsmagic - List all magic commands
%jobs - List all running jobs
%matplotlib - Set up the matplotlib backend
%md - Write Markdown text
%pip - Install Python packages
%python - Execute Python code (e.g. %python dbutils.fs.rm("/user/hive/warehouse/test/", True))
%r - Execute R code
%reload - Reload module contents
%run - Execute a Python file or a notebook
%scala - Execute Scala code
%sh - Execute shell commands on the cluster nodes (e.g. %sh git clone https://github.com/repo/test)
%sql - Execute SQL queries
%who - List all the variables in the current scope

Notebook Widgets

# Create widgets
dbutils.widgets.text("param_name", "default_value", "label")
dbutils.widgets.dropdown("param_name", "default", ["option1", "option2"])
dbutils.widgets.multiselect("param_name", "default", ["option1", "option2"])
dbutils.widgets.combobox("param_name", "default", ["option1", "option2"])

# Get widget values
param_value = dbutils.widgets.get("param_name")

# Remove widgets
dbutils.widgets.remove("param_name")
dbutils.widgets.removeAll()

Secrets Management

# Scope creation and secret writes go through the Databricks CLI or REST API,
# not dbutils (dbutils.secrets only exposes get, getBytes, and list):
#   databricks secrets create-scope scope_name
#   databricks secrets put-secret scope_name secret_key
#   databricks secrets delete-secret scope_name secret_key

# Retrieve a secret from a notebook
secret_value = dbutils.secrets.get("scope_name", "secret_key")

# List secrets in a scope
dbutils.secrets.list("scope_name")

Accessing Files

/path/to/file (local)
dbfs:/path/to/file (DBFS)
file:/path/to/file (driver filesystem)
s3://path/to/file (S3)
/Volumes/catalog/schema/volume/path (Unity Catalog Volumes)

Copying Files

%fs cp file:/<path> /Volumes/<catalog>/<schema>/<volume>/<path>
%python dbutils.fs.cp("file:/<path>", "/Volumes/<catalog>/<schema>/<volume>/<path>")
%python dbutils.fs.cp("file:/databricks/driver/test", "dbfs:/repo", True)
%sh cp /<path> /Volumes/<catalog>/<schema>/<volume>/<path>

SQL Statements (DDL)

Create & Use Schema
CREATE SCHEMA test;
CREATE SCHEMA custom LOCATION 'dbfs:/custom';
USE SCHEMA test;

Unity Catalog (UC)
-- Create catalog
CREATE CATALOG my_catalog COMMENT "Production catalog";
-- Create schema in UC
CREATE SCHEMA my_catalog.my_schema;
USE CATALOG my_catalog;
USE SCHEMA my_schema;
-- Create volume (for files)
CREATE VOLUME my_catalog.my_schema.my_volume;
ALTER VOLUME my_catalog.my_schema.my_volume OWNER TO `team@company.com`;
-- List catalogs, schemas, volumes
SHOW CATALOGS;
SHOW SCHEMAS IN my_catalog;
SHOW VOLUMES IN my_catalog.my_schema;
-- Grant permissions
GRANT USAGE ON CATALOG my_catalog TO `user@company.com`;
GRANT READ_VOLUME ON VOLUME my_catalog.my_schema.my_volume TO `user@company.com`;

Create Table
CREATE TABLE test(col1 INT, col2 STRING, col3 STRING, col4 BIGINT, col5 INT, col6 FLOAT);
CREATE TABLE test AS SELECT * EXCEPT (_rescued_data) FROM read_files('/repo/data/test.csv');
CREATE TABLE test USING CSV LOCATION '/repo/data/test.csv';
CREATE TABLE test USING CSV OPTIONS (header="true") LOCATION '/repo/data/test.csv';
CREATE TABLE test AS ...
CREATE TABLE test USING ...
CREATE TABLE test(id INT, title STRING, col1 STRING, publish_time BIGINT, pages INT, price FLOAT) COMMENT 'This is comment for the table itself';
CREATE TABLE test AS SELECT * EXCEPT (_rescued_data) FROM read_files('/repo/data/test.json', format => 'json');
CREATE TABLE test_raw AS SELECT * EXCEPT (_rescued_data) FROM read_files('/repo/data/test.csv', sep => ';');
CREATE TABLE custom_table_test LOCATION 'dbfs:/custom-table' AS SELECT * EXCEPT (_rescued_data) FROM read_files('/repo/data/test.csv');
CREATE TABLE test PARTITIONED BY (col1) AS SELECT * EXCEPT (_rescued_data) FROM read_files('/repo/data/test.csv');
CREATE TABLE users(
  firstname STRING,
  lastname STRING,
  full_name STRING GENERATED ALWAYS AS (concat(firstname, ' ', lastname))
);
CREATE OR REPLACE TABLE test AS SELECT * EXCEPT (_rescued_data) FROM read_files('/repo/data/test.csv');
CREATE OR REPLACE TABLE test AS SELECT * FROM json.`/repo/data/test.json`;
CREATE OR REPLACE TABLE test AS SELECT * FROM read_files('/repo/data/test.csv');

Create View
CREATE VIEW view_test AS SELECT * FROM test WHERE col1 = 'test';
CREATE VIEW view_test AS SELECT col1, col1 FROM test JOIN test2 ON test.col2 == test2.col2;
CREATE TEMP VIEW temp_test AS SELECT * FROM test WHERE col1 = 'test';
CREATE TEMP VIEW temp_test AS SELECT * FROM read_files('/repo/data/test.csv');
CREATE GLOBAL TEMP VIEW view_test AS SELECT * FROM test WHERE col1 = 'test';
SELECT * FROM global_temp.view_test;
CREATE TEMP VIEW jdbc_example USING JDBC OPTIONS (
  url "<jdbc-url>",
  dbtable "<table-name>",
  user '<username>',
  password '<password>');
CREATE OR REPLACE TEMP VIEW test AS SELECT * FROM delta.`<logpath>`;
CREATE VIEW event_log_raw AS SELECT * FROM event_log("<pipeline-id>");
CREATE OR REPLACE TEMP VIEW test_view AS SELECT test.col1 AS col1 FROM test_table WHERE col1 = 'value1' ORDER BY timestamp DESC LIMIT 1;

Drop
DROP TABLE test;

Describe
SHOW TABLES;
DESCRIBE EXTENDED test;

SQL Statements (DML)

Select
SELECT * FROM csv.`/repo/data/test.csv`;
SELECT * FROM read_files('/repo/data/test.csv');
SELECT * FROM read_files('/repo/data/test.csv', format => 'csv', header => 'true', sep => ',');
SELECT * FROM json.`/repo/data/test.json`;
SELECT * FROM json.`/repo/data/*.json`;
SELECT * FROM test WHERE year(from_unixtime(test_time)) > 1900;
SELECT * FROM test WHERE title LIKE '%a%';
SELECT * FROM test WHERE title LIKE 'a%';
SELECT * FROM test WHERE title LIKE '%a';
SELECT * FROM test TIMESTAMP AS OF '2024-01-01T00:00:00.000Z';
SELECT * FROM test VERSION AS OF 2;
SELECT * FROM test@v2;
SELECT * FROM event_log("<pipeline-id>");
SELECT count(*) FROM VALUES (NULL), (10), (10) AS example(col);
SELECT count(col) FROM VALUES (NULL), (10), (10) AS example(col);
SELECT count_if(col1 = 'test') FROM test;
SELECT from_unixtime(test_time) FROM test;
SELECT cast(test_time / 1 AS timestamp) FROM test;
SELECT cast(cast(test_time AS BIGINT) AS timestamp) FROM test;
SELECT element.sub_element FROM test;
SELECT flatten(array(array(1, 2), array(3, 4)));
SELECT * FROM (SELECT col1, col2 FROM test) PIVOT (sum(col1) FOR col2 IN ('item1', 'item2'));
SELECT *, CASE WHEN col1 > 10 THEN 'value1' ELSE 'value2' END FROM test;
SELECT * FROM test ORDER BY (CASE WHEN col1 > 10 THEN col2 ELSE col3 END);
WITH t(col1, col2) AS (SELECT 1, 2) SELECT * FROM t WHERE col1 = 1;
SELECT details:flow_definition.output_dataset AS output_dataset,
       details:flow_definition.input_datasets AS input_dataset
FROM event_log_raw, latest_update
WHERE event_type = 'flow_definition' AND origin.update_id = latest_update.id;

Insert
INSERT OVERWRITE test SELECT * FROM read_files('/repo/data/test.csv');
INSERT INTO test(col1, col2) VALUES ('value1', 'value2');

Merge Into
MERGE INTO test USING test_to_delete ON test.col1 = test_to_delete.col1 WHEN MATCHED THEN DELETE;
MERGE INTO test USING test_to_update ON test.col1 = test_to_update.col1 WHEN MATCHED THEN UPDATE SET *;
MERGE INTO test USING test_to_insert ON test.col1 = test_to_insert.col1 WHEN NOT MATCHED THEN INSERT *;

Copy Into
COPY INTO test FROM '/repo/data' FILEFORMAT = CSV FILES = ('test.csv') FORMAT_OPTIONS('header' = 'true', 'inferSchema' = 'true');

Spark DataFrame API

Read Data
# Read CSV
df = spark.read.format("csv").option("header", "true").load("/path/to/file.csv")
df = spark.read.csv("/path/to/file.csv", header=True)
# Read Parquet
df = spark.read.parquet("/path/to/file.parquet")
# Read JSON
df = spark.read.json("/path/to/file.json")
# Read Delta table
df = spark.read.table("my_table")
df = spark.read.format("delta").load("/path/to/delta/table")
# Read from Volumes
df = spark.read.csv("/Volumes/catalog/schema/volume/file.csv", header=True)

Write Data
# Write modes: overwrite, append, ignore, error
df.write.mode("overwrite").format("parquet").save("/path/to/output")
df.write.mode("overwrite").option("mergeSchema", "true").format("delta").save("/path/to/delta")
# Write to table
df.write.mode("overwrite").saveAsTable("my_table")
df.write.mode("overwrite").option("path", "/path").saveAsTable("my_table")
# Write to Volume
df.write.mode("overwrite").parquet("/Volumes/catalog/schema/volume/output")

Common Transformations
# Select columns
df.select("col1", "col2")
df.select(df.col1, df.col2)
# Filter/Where
df.filter(df.col1 > 10)
df.where("col1 > 10")
# GroupBy and aggregations
df.groupBy("col1").agg({"col2": "sum", "col3": "count"})
from pyspark.sql.functions import sum, count, avg
df.groupBy("col1").agg(sum("col2"), count("col3"))
# Joins
df1.join(df2, on="col1", how="inner")
df1.join(df2, (df1.col1 == df2.col1) & (df1.col2 == df2.col2), how="left")
# Distinct/Dedup
df.distinct()
df.dropDuplicates(["col1", "col2"])
# Sort
df.sort("col1", ascending=False)
df.orderBy(df.col1.desc())

Performance Optimization

Delta Lake Optimization
-- Optimize table (compacts small files)
OPTIMIZE my_table;
OPTIMIZE my_table ZORDER BY col1, col2;
-- Check table stats
ANALYZE TABLE my_table COMPUTE STATISTICS;
ANALYZE TABLE my_table COMPUTE STATISTICS FOR COLUMNS col1, col2;
-- View statistics
DESCRIBE EXTENDED my_table;

Partitioning Strategy
# Write with partitioning
df.write \
    .mode("overwrite") \
    .partitionBy("date", "region") \
    .format("delta") \
    .save("/path/to/table")
# Partition pruning (applied automatically)
# SELECT * FROM table WHERE date = '2024-01-01' AND region = 'US'

Query Performance
# Enable adaptive query execution
spark.conf.set("spark.sql.adaptive.enabled", "true")
# Enable Arrow-accelerated pandas conversions
spark.conf.set("spark.sql.execution.arrow.enabled", "true")
# Set shuffle partitions
spark.conf.set("spark.sql.shuffle.partitions", "200")
# Monitor query plans
df.explain(mode="extended")

Delta Lake Statements
DESCRIBE HISTORY test;
DESCRIBE HISTORY test LIMIT 1;
INSERT INTO test SELECT * FROM test@v2 WHERE id = 3;
OPTIMIZE test;
OPTIMIZE test ZORDER BY col1;
RESTORE TABLE test TO VERSION AS OF 0;
SELECT * FROM test TIMESTAMP AS OF '2024-01-01T00:00:00.000Z';
SELECT * FROM test VERSION AS OF 2;
SELECT * FROM test@v2;
VACUUM test;
VACUUM test RETAIN 240 HOURS;
%fs ls dbfs:/user/hive/warehouse/test/_delta_log
%python spark.conf.set("spark.databricks.delta.retentionDurationCheck.enabled", "false")

Delta Live Table Statements
CREATE OR REFRESH LIVE TABLE test_raw AS SELECT * FROM json.`/repo/data/test.json`;
CREATE OR REFRESH STREAMING TABLE test AS SELECT * FROM STREAM read_files('/repo/data/test*.json');
CREATE OR REFRESH LIVE TABLE test_cleaned AS SELECT col1, col2, col3, col4 FROM live.test_raw;
CREATE OR REFRESH LIVE TABLE recent_test AS SELECT col1, col2 FROM live.test2 ORDER BY creation_time DESC LIMIT 10;

Functions
CREATE OR REPLACE FUNCTION test_function(temp DOUBLE) RETURNS DOUBLE RETURN (temp - 10);
CREATE OR REPLACE FUNCTION add_numbers(a INT, b INT) RETURNS INT RETURN a + b;

Useful dbutils Functions

File System Operations
# List files
dbutils.fs.ls("dbfs:/path")
dbutils.fs.ls("/Volumes/catalog/schema/volume")
# Get file info
dbutils.fs.getStatus("dbfs:/path/file.txt")
# Move/Rename
dbutils.fs.mv("dbfs:/old/path", "dbfs:/new/path")
# Remove files
dbutils.fs.rm("dbfs:/path", recurse=True)
# Create directory
dbutils.fs.mkdirs("dbfs:/new/directory")
# Copy files
dbutils.fs.cp("dbfs:/source", "dbfs:/dest", recurse=True)
# Head (preview file)
dbutils.fs.head("dbfs:/path/file.txt", 1000)

Notebook Context
# Get notebook path
dbutils.notebook.entry_point.getDbutils().notebook().getContext().notebookPath().get()
# Get current user
dbutils.notebook.entry_point.getDbutils().notebook().getContext().userName().get()
# Exit notebook
dbutils.notebook.exit("Exit message")
# Run another notebook
dbutils.notebook.run("./other_notebook", timeout_seconds=3600, arguments={"param1": "value1"})

Auto Loader
%python
spark.readStream.format("cloudFiles") \
    .option("cloudFiles.format", "json") \
    .option("cloudFiles.schemaLocation", "/autoloader-schema") \
    .option("pathGlobFilter", "test*.json") \
    .load("/repo/data") \
    .writeStream \
    .option("mergeSchema", "true") \
    .option("checkpointLocation", "/autoloader-checkpoint") \
    .start("demo")

%fs head /autoloader-schema/_schemas/0

CREATE OR REFRESH STREAMING TABLE test AS
SELECT * FROM cloud_files(
  '/repo/data',
  'json',
  map("cloudFiles.inferColumnTypes", "true", "pathGlobFilter", "test*.json")
);

-- DLT expectations
CONSTRAINT positive_timestamp EXPECT (creation_time > 0)
CONSTRAINT positive_timestamp EXPECT (creation_time > 0) ON VIOLATION DROP ROW
CONSTRAINT positive_timestamp EXPECT (creation_time > 0) ON VIOLATION FAIL UPDATE

CDC Statements
APPLY CHANGES INTO live.target
FROM stream(live.cdc_source)
KEYS (col1)
APPLY AS DELETE WHEN col2 = "DELETE"
SEQUENCE BY col3
COLUMNS * EXCEPT (col);

Security Statements
GRANT <privilege> ON <object_type> <object_name> TO <user_or_group>;
GRANT SELECT ON TABLE test TO `databricks@degols.net`;
REVOKE <privilege> ON <object_type> <object_name> FROM `test@gmail.com`;
-- UC specific
GRANT USAGE ON CATALOG my_catalog TO `user@company.com`;
GRANT CREATE ON SCHEMA my_catalog.my_schema TO `team@company.com`;
GRANT READ_VOLUME ON VOLUME my_catalog.my_schema.my_volume TO `user@company.com`;
GRANT WRITE_VOLUME ON VOLUME my_catalog.my_schema.my_volume TO `user@company.com`;

Jobs and Workflows
# List running jobs
%jobs
# Submit job via API
from databricks.sdk import WorkspaceClient
w = WorkspaceClient()
job = w.jobs.create(
    name="my_job",
    tasks=[{
        "task_key": "task1",
        "notebook_task": {"notebook_path": "/Users/me/notebook"},
        "new_cluster": {"spark_version": "14.3.x-scala2.12", "num_workers": 2, "node_type_id": "i3.xlarge"}
    }]
)

Links
Official Databricks Documentation ...
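As a small pure-Python aside: the MERGE INTO upsert pattern above is often issued against many target/source pairs, so it can be templated once and reused. A minimal sketch; `build_merge_upsert` is an illustrative helper name, not part of dbutils or the Databricks SDK:

```python
def build_merge_upsert(target: str, source: str, key: str) -> str:
    """Render the MERGE INTO upsert pattern from the cheatsheet:
    update the row when the key matches, insert it otherwise."""
    return (
        f"MERGE INTO {target} USING {source} "
        f"ON {target}.{key} = {source}.{key} "
        "WHEN MATCHED THEN UPDATE SET * "
        "WHEN NOT MATCHED THEN INSERT *"
    )

# In a notebook the rendered statement would be passed to spark.sql(...):
# spark.sql(build_merge_upsert("test", "test_to_update", "col1"))
print(build_merge_upsert("test", "test_to_update", "col1"))
```

Note this only covers the identifier slots; anything user-supplied should still be validated before being interpolated into SQL.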

April 4, 2026 · 8 min · James M

Native Instruments: From Preliminary Insolvency to M&A - What Comes Next

When Native Instruments entered preliminary insolvency in late January, it felt like a seismic moment. Two months later, the picture has gotten clearer—and in some ways, more complex. The company has now moved into formal insolvency proceedings, and simultaneously revealed it’s in active merger and acquisition talks with multiple interested buyers. This isn’t a bankruptcy death spiral; it’s a controlled restructuring. But it raises harder questions about what went wrong, and what salvation might actually look like. ...

April 4, 2026 · 6 min · James M

Taste Is the New Scarcity

If intelligence is becoming a commodity, then something else becomes precious. When you can generate a thousand solutions to a problem with a prompt, the question is no longer “can I get an answer?” The question becomes “which answer is good?” When you can write code, design systems, draft strategies, analyze data, or explore ideas simply by asking, the bottleneck shifts. It is no longer thinking capacity. It is judgment. ...

April 4, 2026 · 5 min · James M

Polkadot 2026: From Infrastructure to Applications

The Pivot Year: Polkadot’s Strategic Shift in 2026

Polkadot has undergone a fundamental transformation in 2025-2026. After years of building infrastructure layers, the ecosystem is making a decisive pivot toward user-facing applications. This isn’t just a narrative shift—it’s embedded in technical upgrades, tokenomics redesigns, and validator economics that reflect a maturing network ready to compete at the application layer.

Technical Foundation: The Three Pillars Come of Age

The completion of Polkadot 2.0’s core technical pillars in 2025 wasn’t ceremonial. Asynchronous Backing, Agile Coretime, and Elastic Scaling have moved from whitepapers to live implementations, fundamentally changing how the network operates. ...

April 4, 2026 · 4 min · James M

NASA Artemis II Tracking Dashboards

About

NASA’s Artemis II mission represents a critical step in returning humans to the Moon. Real-time tracking dashboards provide the public with live updates on mission status, vehicle telemetry, and launch preparations. These dashboards showcase NASA’s commitment to transparency, allowing space enthusiasts and stakeholders to monitor every aspect of the mission as it unfolds.

Official Resources

Artemis II - NASA.gov — Official NASA information and resources for the Artemis II mission. ...

April 4, 2026 · 1 min · James M