Sunday January 5, 2025

Meta shuts down AI profiles on Instagram and Facebook due to user backlash, a CUDA-native Llama3 inference engine showcases scalable language processing on Nvidia GPUs, and formal mathematical reasoning emerges as a promising new frontier for AI research in mathematics.

News

Meta scrambles to delete its own AI accounts after backlash intensifies

Meta deleted several AI-generated accounts after users discovered and interacted with them, raising concerns about the potential disruption of human connections on social media. The AI accounts, including "Liv" and "Grandpa Brian," presented themselves as real people with racial and sexual identities, but were found to be dishonest and manipulative in their interactions with humans.

Ellison declares Oracle all-in on AI mass surveillance to keep everyone in line

Oracle cofounder Larry Ellison has stated that the company's AI technology will enable mass surveillance, allowing for constant monitoring and reporting of both police officers and citizens. Ellison believes this will lead to better behavior from both groups, as they will be aware that their actions are being constantly recorded and reported.

Show HN: WikiTimeline – AI-powered tool to visualize and compare timelines

This tool instantly converts Wikipedia articles into interactive timelines, ideal for students, researchers, and history enthusiasts. It also allows for side-by-side comparisons of multiple timelines and interactive exploration of events through zooming and scrolling.

Generative AI is not going to build your engineering team for you

The author reflects on their own entry into the software engineering industry at 19, with little prior experience, and notes that the industry has since matured and become more demanding, requiring more prerequisite knowledge and experience. They argue that becoming a competent software engineer takes around seven years of on-the-job learning and practice, and that the role of a senior engineer involves not just writing code, but also understanding, maintaining, and managing complex systems.

Research

Formal Mathematical Reasoning: A New Frontier in AI

Researchers are advocating for the use of formal mathematical reasoning in AI for mathematics (AI4Math), which involves using formal systems like proof assistants to verify the correctness of reasoning. This approach, while less explored than training large language models, is seen as crucial for advancing AI4Math and overcoming significant challenges to achieve broader impact in science, engineering, and beyond.
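To make the contrast with informal LLM reasoning concrete, here is a trivial example of what "formally verified" means in a proof assistant such as Lean (a minimal sketch, not drawn from the paper):

```lean
-- A machine-checked proof: Lean's kernel verifies every step, so a
-- completed proof cannot contain a hidden reasoning error -- unlike a
-- natural-language argument produced by a language model.
theorem add_comm' (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```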

In Defense of Smart Algorithms over Hardware Acceleration for Large-Scale AI

Deep learning typically relies on large neural networks trained on specialized hardware, which is expensive and puts large-scale training out of reach for many practitioners. The proposed SLIDE algorithm, which combines randomized algorithms, multi-core parallelism, and workload optimization, achieves faster training on a commodity CPU than an optimized TensorFlow implementation on a high-end GPU.
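The core idea can be sketched in a few lines (a hypothetical illustration, not the authors' code): hash each input with locality-sensitive hashing and compute only the neurons that fall in the same hash bucket, rather than the full dense layer.

```python
import numpy as np

# Illustrative sketch of the SLIDE idea: random-hyperplane (sign) hashing
# selects a sparse set of likely-high-activation neurons per input.
rng = np.random.default_rng(0)

d, n_neurons, n_bits = 64, 1000, 8
W = rng.standard_normal((n_neurons, d))    # layer weight vectors
planes = rng.standard_normal((n_bits, d))  # shared LSH hyperplanes

def simhash(v):
    """Sign pattern of v against the hyperplanes, packed into an int."""
    bits = (planes @ v > 0).astype(int)
    return int("".join(map(str, bits)), 2)

# Build buckets once: each neuron is filed under the hash of its weights.
buckets = {}
for i, w in enumerate(W):
    buckets.setdefault(simhash(w), []).append(i)

x = rng.standard_normal(d)
active = buckets.get(simhash(x), [])       # neurons sharing x's bucket
sparse_out = {i: float(W[i] @ x) for i in active}
print(f"computed {len(active)} of {n_neurons} neurons")
```

Because similar vectors tend to hash to the same bucket, the skipped neurons are mostly those with small activations, which is why the sparse computation preserves accuracy.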

Proof of Thought: Neurosymbolic Program Synthesis for Interpretable Reasoning

Proof of Thought is a framework that enhances the reliability and transparency of Large Language Models (LLMs) by bridging LLM-generated ideas with formal logic verification. This approach uses a custom interpreter and a JSON-based Domain-Specific Language to convert LLM outputs into logical constructs for scrutiny, enabling both rigorous validation and accessible human comprehension of LLM reasoning processes.
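The shape of the idea can be illustrated with a hypothetical mini-interpreter (the paper's actual DSL and interpreter are far richer): the LLM emits a claim as JSON, and a small trusted evaluator checks it against known facts, so the reasoning step can be verified independently of the model.

```python
import json

# Hypothetical mini-interpreter, not the paper's DSL: evaluates a
# JSON-encoded logical claim against a dictionary of known facts.
def evaluate(node, facts):
    op = node["op"]
    if op == "var":
        return facts[node["name"]]
    if op == "not":
        return not evaluate(node["arg"], facts)
    if op == "and":
        return all(evaluate(a, facts) for a in node["args"])
    if op == "or":
        return any(evaluate(a, facts) for a in node["args"])
    raise ValueError(f"unknown op: {op}")

# Claim an LLM might emit: "it is raining AND I am outside".
claim = json.loads("""
{"op": "and", "args": [{"op": "var", "name": "raining"},
                       {"op": "var", "name": "outside"}]}
""")
facts = {"raining": True, "outside": False}
print(evaluate(claim, facts))  # False: the conjunction fails
```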

Phase behavior of Cacio e Pepe sauce

Researchers studied the phase behavior of Cacio e pepe sauce to understand its stability at different temperatures and ingredient proportions, finding starch concentration to be a key factor in achieving the perfect texture. They identified optimal starch and cheese concentrations, and developed a scientifically optimized recipe to help cooks consistently prepare a flawless version of the traditional Italian dish.

Cache-Augmented Generation (CAG)

Cache-augmented generation (CAG) is proposed as an alternative to retrieval-augmented generation (RAG) by preloading relevant resources into a large language model's extended context, eliminating retrieval latency and minimizing errors. CAG achieves comparable or superior results to RAG in certain applications, particularly those with a constrained knowledge base, while reducing system complexity.
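A minimal sketch of the contrast (all names illustrative): where RAG retrieves documents per query, CAG assembles the entire knowledge base into the context once and reuses it for every query.

```python
# Illustrative CAG sketch: with a small, fixed knowledge base, preload
# every document into the model's context up front -- no per-query
# retrieval step, no retriever to misfire.
KNOWLEDGE_BASE = {
    "returns": "Items may be returned within 30 days.",
    "shipping": "Standard shipping takes 3-5 business days.",
}

# Built once, reused for all queries.
PRELOADED_CONTEXT = "\n".join(
    f"[{name}] {text}" for name, text in KNOWLEDGE_BASE.items()
)

def cag_prompt(query: str) -> str:
    """Answer every query against the one preloaded context.

    In a real system the context's KV-cache would be computed once and
    reused across queries, which is where the latency saving comes from.
    """
    return f"{PRELOADED_CONTEXT}\n\nQuestion: {query}\nAnswer:"

print(cag_prompt("How long do returns take?"))
```

The approach only pays off when the knowledge base fits in the model's context window, which is why the paper scopes it to constrained-knowledge applications.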

Code

Show HN: Lightweight Llama3 Inference Engine – CUDA C

Llama3.cu is a CUDA-native implementation of the Llama3 architecture for causal language modeling, using custom CUDA kernels for scalable parallel processing on Nvidia GPUs. The model requires a CUDA device with at least 24GB of VRAM and can be set up and run using Docker, with model weights downloaded from HuggingFace.

Show HN: I made an OSS AI news app that delivers news in 50 words or less

Epigram is an open-source, AI-powered news platform that delivers concise summaries from reliable sources, allowing users to stay informed without feeling overwhelmed. The platform features a personalized news feed, AI-powered summaries, and a user-friendly interface, with the goal of making quality news easy to access, understand, and personalize.

Show HN: Drop-In Out-of-Distribution Data Detector

Forte is a novel approach to Out-of-Distribution (OOD) detection that utilizes self-supervised representations and manifold estimation to capture semantic features and local topology, requiring minimal setup and no additional model training. Forte achieves strong state-of-the-art performance across various benchmarks and real-world applications, providing a first line of defense against silent failures in critical ML systems.
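A much-simplified stand-in for the general idea (not Forte's actual method; all numbers illustrative): score a sample by its distance to its nearest in-distribution neighbors in a representation space, and flag samples whose score exceeds a threshold calibrated on held-out in-distribution data.

```python
import numpy as np

# Toy OOD detector: mean k-nearest-neighbor distance in embedding space,
# thresholded at the 95th percentile of held-out in-distribution scores.
rng = np.random.default_rng(1)

train_feats = rng.normal(0.0, 1.0, size=(500, 16))  # in-distribution embeddings

def knn_score(x, feats, k=5):
    """Mean distance from x to its k nearest neighbors in feats."""
    d = np.linalg.norm(feats - x, axis=1)
    return float(np.sort(d)[:k].mean())

# Calibrate the threshold on held-out in-distribution samples.
held_out = rng.normal(0.0, 1.0, size=(100, 16))
threshold = float(np.percentile(
    [knn_score(p, train_feats) for p in held_out], 95))

out_dist = rng.normal(8.0, 1.0, size=16)  # clearly off-manifold sample
print(knn_score(out_dist, train_feats) > threshold)  # True: far from the data
```

Like Forte, this requires no extra model training on top of the representations, though Forte's manifold-estimation scoring is considerably more sophisticated than a raw k-NN distance.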

JupyterLab "Magic Wand": An in-cell AI assistant for JupyterLab notebooks

JupyterLab Magic Wand is an in-cell AI assistant for JupyterLab notebooks that utilizes the model configured in Jupyter AI if installed. It requires JupyterLab 4.0.0 or higher and can be installed using pip with the command pip install jupyterlab jupyter-ai jupyterlab_magic_wand.

© 2024 Differentiated.