Apple finally allows game emulators in the App Store

Gaming emulators are finally permitted on Apple devices. I’ve just downloaded the Delta app for iPhone and it’s awesome; it runs really smoothly.

Apple App Store: Delta - Game Emulator

Links & further details
PlayStation, GameCube, Wii, and SEGA Emulator for iPhone and Apple TV Coming to App Store
What to Know About Apple Allowing Game Emulators in the App Store
NES Emulator for iPhone and iPad Now Available on App Store
Delta Game Emulator Now Available From App Store on iPhone
The first gaming emulators are now on the iOS App Store

Music Production: Analogue vs Digital Synths

Overview
Analog and digital synthesizers are both electronic instruments that create sound, but they differ in how they generate and manipulate that sound. Here’s a list of the key differences (a small code sketch follows the list).

Analog Synths
Use electrical circuits to produce sound waves. These circuits can be influenced by various knobs and sliders, creating a warm, rich sound with subtle imperfections.
Pros: warm, organic sound, sought after for genres like classic electronic music; a “hands-on” approach to sound design.
Cons: can be expensive; limited in features compared to digital synths; less polyphony.

Digital Synths
...
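To make the "how the sound is generated" difference concrete, here is a purely illustrative Python sketch of a digital oscillator: every sample is computed from a formula, so a digital synth repeats perfectly cycle after cycle, which is why a little deliberate drift and noise is sometimes added to approximate analog imperfection. The sample rate, pitch, file name, and NumPy/SciPy choice are arbitrary illustration values, not tied to any instrument mentioned here.

import numpy as np
from scipy.io import wavfile

SAMPLE_RATE = 44100      # samples per second
DURATION = 2.0           # seconds of audio to generate
FREQ = 220.0             # oscillator pitch in Hz (A3)

t = np.arange(int(SAMPLE_RATE * DURATION)) / SAMPLE_RATE

# Mathematically exact sawtooth: identical on every cycle
saw = 2.0 * (t * FREQ - np.floor(0.5 + t * FREQ))

# Slow pitch drift plus low-level noise, loosely imitating analog "imperfection"
drift = 1.0 + 0.002 * np.sin(2 * np.pi * 0.5 * t)
saw_drifting = 2.0 * (t * FREQ * drift - np.floor(0.5 + t * FREQ * drift))
signal = 0.45 * (saw + saw_drifting) + 0.005 * np.random.randn(t.size)

# Write a 16-bit WAV file so the result can be auditioned
wavfile.write("digital_saw_demo.wav", SAMPLE_RATE, (signal * 32767).astype(np.int16))

Swapping the sawtooth formula for other expressions (sine, FM, wavetables) is what gives digital synths their feature breadth, while the "imperfection" terms are an explicit, optional add-on rather than a property of the hardware.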

OpenAI Voice Engine

About
OpenAI’s Voice Engine is a text-to-speech tool that can create realistic voices from just a 15-second audio sample. It is notable that a small model, given only a single 15-second sample, can create emotive and realistic voices. To ensure responsible use, testers must get clear consent from voice providers, avoid creating user-generated voices, and inform listeners that the voices are AI-generated.

Links
https://openai.com/blog/navigating-the-challenges-and-opportunities-of-synthetic-voices

YouTube
OpenAI Introducing: A New Era of Human-like AI Voices

Microsoft Stargate

About
Microsoft Stargate is an ambitious project, in collaboration with OpenAI, to build a massive AI supercomputer housed in a giant data center. It is estimated to cost over $100 billion; construction is expected to begin by 2028 and take several years to complete. The purpose of Stargate is to provide the computing power needed to develop next-generation artificial intelligence models.

Links
https://gizmodo.com/microsoft-building-stargate-transport-openai-future-1851375210

YouTube
Microsoft and OpenAI to collaborate for $100 billion data centre project | Republic Business

Google Gemini Ultra

About
Google Gemini Ultra is the top-of-the-line offering within the Gemini family of large language models, developed by Google DeepMind [1]. It stands out for its superior capabilities compared to the base Gemini and the Advanced version [2]. Here’s what makes it special:

Unmatched Performance
State-of-the-Art: Outperforms previous models on various benchmarks, including tasks related to text, code, and more [3].
MMLU Champion: Achieves a score of 90.0% on the Massive Multitask Language Understanding (MMLU) benchmark, surpassing human experts for the first time [2]. MMLU tests a broad range of subjects, demanding both knowledge and problem-solving skills.

Advanced Features
...

Google Gemini Advanced

About
Google Gemini Advanced is an upgrade to the base Gemini large language model, offering several enhanced capabilities.

Advanced Capabilities
Multimodal Reasoning: Can analyze and understand complex information combining text and visuals.
Coding Expertise: Understands, explains, and generates code in various programming languages.
Creative Collaboration: Helps brainstorm ideas and generate creative text formats for digital content.

Enhanced User Experience
Extended Conversations: Facilitates longer and more detailed interactions compared to the base model.
Contextual Awareness: Better grasps the context of your conversation, allowing for tailored responses.
Advanced Learning: Acts as a personal tutor, creating customized learning materials and engaging in discussions.
Coding Assistance: Assists with intricate coding tasks, suggesting solutions and evaluating different approaches.

Accessibility
...

Polkadot News 2024

2024-03-19 Empowering Next-Level Insights: Dune Brings Polkadot and Kusama Analytics into Focus
https://polkadot.network/blog/polkadot-kusama-analytics-dune/

2024-03-18 Polkadot Blockchain Academy Adds Remote Option for Select Students
https://polkadot.network/blog/polkadot-blockchain-academy-remote-learning-developers/

2024-03-04 The Polkadot Alpha Program: A New Era of Collaborative Building
https://polkadot.network/blog/the-polkadot-alpha-program-a-new-era-for-decentralized-building-collaboration/

2024-02-18 Gavin Wood Was Right About Polkadot DOT….

2024-01-25 Polkadot 2.0 Review | The Biggest Protocol Upgrade Yet?

2024-01-24 Polkadot Blockchain Academy: Targeted Education for Builders and Founders
https://polkadot.network/blog/polkadot-blockchain-academy-meeting-the-needs-of-builders-and-founders-alike/

2024-01-17 Polkadot Review 2024: DOT Updates You Need to Know!
https://www.coinbureau.com/review/polkadot-dot/

2024-01-12 Polkadot Showcases Industry-Leading Scalability in Positive End to 2023
https://polkadot.network/blog/polkadot_q4_update_data/

Music Production: Ableton Live 12

Overview
Ableton have just released Ableton Live 12.
https://www.ableton.com/en/shop/live-12/

What’s New
UI Improvements: improved layout; filter based on tags in Live’s Browser; improved workflow
New Devices: Meld (synth), Granulator III (sampler), Roar (saturation effect)
New MIDI Tools: reshape MIDI patterns; generate new ideas; join, split & chop notes; stay in key across devices
Additional Features: uncover new textures; Performance Pack; Lost and Found

Pricing
Live 12 Intro - £69: Essentials – 16 tracks and 5+ GB of sounds
Live 12 Standard - £259: Full features – 38+ GB of sounds and more instruments and effects
Live 12 Suite - £539: Complete studio – 71+ GB of sounds, Max for Live and all instruments and effects

YouTube Playlists
Learn Live 12

YouTube Videos
Ableton Live 12: Explore what’s new
BIG UPDATE: Ableton Live 12 preview - Our top 5 favorite features
Ableton Live 12: Yes or no?
Ableton Live 12 - First Look - Sonic LAB Presentation

Databricks Training & Certification

Coursera
Databricks

Databricks
Databricks Training & Certification
Learn
Learning Library

DataCamp
A Comprehensive Guide to Databricks Lakehouse AI For Data Scientists
Databricks Tutorial: 7 Must-know Concepts For Any Data Specialist
Introduction to Databricks

edX
Databricks

Udemy
Databricks Certified Data Engineer Associate
Databricks Certified Data Engineer Professional

Whizlabs Practice Tests
Databricks Certified Associate Developer for Apache Spark (Python)
Databricks Certified Data Analyst Associate Certification
Databricks Certified Data Engineer Associate Certification
Databricks Certified Data Engineer Professional Certification
Databricks Certified Machine Learning Associate Certification
Databricks Certified Machine Learning Professional Certification

Databricks Cheat Sheets

Databricks Notebook Commands

%config      Set configuration options for the notebook
%env         Set environment variables
%fs          Interact with the Databricks file system, e.g. %fs ls dbfs:/repo
%load        Loads the contents of a file into a cell
%lsmagic     Lists all magic commands
%jobs        Lists all running jobs
%matplotlib  Sets up the matplotlib backend
%md          Write Markdown text
%pip         Install Python packages
%python      Executes Python code, e.g. %python dbutils.fs.rm("/user/hive/warehouse/test/", True)
%r           Executes R code
%reload      Reloads module contents
%run         Executes a Python file or a notebook
%scala       Executes Scala code
%sh          Executes shell commands on the cluster nodes, e.g. %sh git clone https://github.com/repo/test
%sql         Executes SQL queries
%who         Lists all the variables in the current scope

Accessing Files
/path/to/file
dbfs:/path/to/file
file:/path/to/file
s3://path/to/file

Copying Files
%fs cp file:/<path> /Volumes/<catalog>/<schema>/<volume>/<path>
%python dbutils.fs.cp("file:/<path>", "/Volumes/<catalog>/<schema>/<volume>/<path>")
%python dbutils.fs.cp("file:/databricks/driver/test", "dbfs:/repo", True)
%sh cp /<path> /Volumes/<catalog>/<schema>/<volume>/<path>

SQL Statements (DDL)

Create & Use Schema
CREATE SCHEMA test;
CREATE SCHEMA custom LOCATION 'dbfs:/custom';
USE SCHEMA test;

Create Table
CREATE TABLE test(col1 INT, col2 STRING, col3 STRING, col4 BIGINT, col5 INT, col6 FLOAT);
CREATE TABLE test AS SELECT * EXCEPT (_rescued_data) FROM read_files('/repo/data/test.csv');
CREATE TABLE test USING CSV LOCATION '/repo/data/test.csv';
CREATE TABLE test USING CSV OPTIONS (header="true") LOCATION '/repo/data/test.csv';
CREATE TABLE test AS ...
CREATE TABLE test USING ...
CREATE TABLE test(id INT, title STRING, col1 STRING, publish_time BIGINT, pages INT, price FLOAT) COMMENT 'This is a comment for the table itself';
CREATE TABLE test AS SELECT * EXCEPT (_rescued_data) FROM read_files('/repo/data/test.json', format => 'json');
CREATE TABLE test_raw AS SELECT * EXCEPT (_rescued_data) FROM read_files('/repo/data/test.csv', sep => ';');
CREATE TABLE custom_table_test LOCATION 'dbfs:/custom-table' AS SELECT * EXCEPT (_rescued_data) FROM read_files('/repo/data/test.csv');
CREATE TABLE test PARTITIONED BY (col1) AS SELECT * EXCEPT (_rescued_data) FROM read_files('/repo/data/test.csv');
CREATE TABLE users(
  firstname STRING,
  lastname STRING,
  full_name STRING GENERATED ALWAYS AS (concat(firstname, ' ', lastname))
);
CREATE OR REPLACE TABLE test AS SELECT * EXCEPT (_rescued_data) FROM read_files('/repo/data/test.csv');
CREATE OR REPLACE TABLE test AS SELECT * FROM json.`/repo/data/test.json`;
CREATE OR REPLACE TABLE test AS SELECT * FROM read_files('/repo/data/test.csv');

Create View
CREATE VIEW view_test AS SELECT * FROM test WHERE col1 = 'test';
CREATE VIEW view_test AS SELECT col1, col1 FROM test JOIN test2 ON test.col2 == test2.col2;
CREATE TEMP VIEW temp_test AS SELECT * FROM test WHERE col1 = 'test';
CREATE TEMP VIEW temp_test AS SELECT * FROM read_files('/repo/data/test.csv');
CREATE GLOBAL TEMP VIEW view_test AS SELECT * FROM test WHERE col1 = 'test';
SELECT * FROM global_temp.view_test;
CREATE TEMP VIEW jdbc_example USING JDBC OPTIONS (
  url "<jdbc-url>",
  dbtable "<table-name>",
  user '<username>',
  password '<password>'
);
CREATE OR REPLACE TEMP VIEW test AS SELECT * FROM delta.`<logpath>`;
CREATE VIEW event_log_raw AS SELECT * FROM event_log("<pipeline-id>");
CREATE OR REPLACE TEMP VIEW test_view AS SELECT test.col1 AS col1 FROM test_table WHERE col1 = 'value1' ORDER BY timestamp DESC LIMIT 1;

Drop
DROP TABLE test;

Describe
SHOW TABLES;
DESCRIBE EXTENDED test;

SQL Statements (DML)

Select
SELECT * FROM csv.`/repo/data/test.csv`;
SELECT * FROM read_files('/repo/data/test.csv');
SELECT * FROM read_files('/repo/data/test.csv', format => 'csv', header => 'true', sep => ',');
SELECT * FROM json.`/repo/data/test.json`;
SELECT * FROM json.`/repo/data/*.json`;
SELECT * FROM test WHERE year(from_unixtime(test_time)) > 1900;
SELECT * FROM test WHERE title LIKE '%a%';
SELECT * FROM test WHERE title LIKE 'a%';
SELECT * FROM test WHERE title LIKE '%a';
SELECT * FROM test TIMESTAMP AS OF '2024-01-01T00:00:00.000Z';
SELECT * FROM test VERSION AS OF 2;
SELECT * FROM test@v2;
SELECT * FROM event_log("<pipeline-id>");
SELECT count(*) FROM VALUES (NULL), (10), (10) AS example(col);
SELECT count(col) FROM VALUES (NULL), (10), (10) AS example(col);
SELECT count_if(col1 = 'test') FROM test;
SELECT from_unixtime(test_time) FROM test;
SELECT cast(test_time / 1 AS timestamp) FROM test;
SELECT cast(cast(test_time AS BIGINT) AS timestamp) FROM test;
SELECT element.sub_element FROM test;
SELECT flatten(array(array(1, 2), array(3, 4)));
SELECT * FROM (SELECT col1, col2 FROM test) PIVOT (sum(col1) FOR col2 IN ('item1', 'item2'));
SELECT *, CASE WHEN col1 > 10 THEN 'value1' ELSE 'value2' END FROM test;
SELECT * FROM test ORDER BY (CASE WHEN col1 > 10 THEN col2 ELSE col3 END);
WITH t(col1, col2) AS (SELECT 1, 2) SELECT * FROM t WHERE col1 = 1;
SELECT details:flow_definition.output_dataset AS output_dataset, details:flow_definition.input_datasets AS input_dataset FROM event_log_raw, latest_update WHERE event_type = 'flow_definition' AND origin.update_id = latest_update.id;

Insert
INSERT OVERWRITE test SELECT * FROM read_files('/repo/data/test.csv');
INSERT INTO test(col1, col2) VALUES ('value1', 'value2');

Merge Into
MERGE INTO test USING test_to_delete ON test.col1 = test_to_delete.col1 WHEN MATCHED THEN DELETE;
MERGE INTO test USING test_to_update ON test.col1 = test_to_update.col1 WHEN MATCHED THEN UPDATE SET *;
MERGE INTO test USING test_to_insert ON test.col1 = test_to_insert.col1 WHEN NOT MATCHED THEN INSERT *;

Copy Into
COPY INTO test FROM '/repo/data' FILEFORMAT = CSV FILES = ('test.csv') FORMAT_OPTIONS('header' = 'true', 'inferSchema' = 'true');

Delta Lake Statements
DESCRIBE HISTORY test;
DESCRIBE HISTORY test LIMIT 1;
INSERT INTO test SELECT * FROM test@v2 WHERE id = 3;
OPTIMIZE test;
OPTIMIZE test ZORDER BY col1;
RESTORE TABLE test TO VERSION AS OF 0;
SELECT * FROM test TIMESTAMP AS OF '2024-01-01T00:00:00.000Z';
SELECT * FROM test VERSION AS OF 2;
SELECT * FROM test@v2;
VACUUM test;
VACUUM test RETAIN 240 HOURS;
%fs ls dbfs:/user/hive/warehouse/test/_delta_log
%python spark.conf.set("spark.databricks.delta.retentionDurationCheck.enabled", "false")

Delta Live Table Statements
CREATE OR REFRESH LIVE TABLE test_raw AS SELECT * FROM json.`/repo/data/test.json`;
CREATE OR REFRESH STREAMING TABLE test AS SELECT * FROM STREAM read_files('/repo/data/test*.json');
CREATE OR REFRESH LIVE TABLE test_cleaned AS SELECT col1, col2, col3, col4 FROM live.test_raw;
CREATE OR REFRESH LIVE TABLE recent_test AS SELECT col1, col2 FROM live.test2 ORDER BY creation_time DESC LIMIT 10;

Functions
CREATE OR REPLACE FUNCTION test_function(temp DOUBLE) RETURNS DOUBLE RETURN (temp - 10);

Auto Loader
%python
spark.readStream.format("cloudFiles") \
    .option("cloudFiles.format", "json") \
    .option("cloudFiles.schemaLocation", "/autoloader-schema") \
    .option("pathGlobFilter", "test*.json") \
    .load("/repo/data") \
    .writeStream \
    .option("mergeSchema", "true") \
    .option("checkpointLocation", "/autoloader-checkpoint") \
    .start("demo")
%fs head /autoloader-schema/_schemas/0
CREATE OR REFRESH STREAMING TABLE test AS SELECT * FROM cloud_files(
  '/repo/data',
  'json',
  map("cloudFiles.inferColumnTypes", "true", "pathGlobFilter", "test*.json")
);
CONSTRAINT positive_timestamp EXPECT (creation_time > 0)
CONSTRAINT positive_timestamp EXPECT (creation_time > 0) ON VIOLATION DROP ROW
CONSTRAINT positive_timestamp EXPECT (creation_time > 0) ON VIOLATION FAIL UPDATE

CDC Statements
APPLY CHANGES INTO live.target
  FROM stream(live.cdc_source)
  KEYS (col1)
  APPLY AS DELETE WHEN col2 = "DELETE"
  SEQUENCE BY col3
  COLUMNS * EXCEPT (col);

Security Statements
GRANT <privilege> ON <object_type> <object_name> TO <user_or_group>;
GRANT SELECT ON TABLE test TO `databricks@degols.net`;
REVOKE <privilege> ON <object_type> <object_name> FROM `test@gmail.com`;

Links
Databricks SQL Language Reference

Cheat Sheets
Compute creation cheat sheet
Platform administration cheat sheet
Production job scheduling cheat sheet

Best Practices
Delta Lake best practices
Hyperparameter tuning with Hyperopt
Deep learning in Databricks
Recommendations for MLOps
Unity Catalog best practices
Cluster configuration best practices
Instance pool configuration best practices

Other
Databricks Cheat Sheet 1
Databricks Notebook Markdown Cheat Sheet