References: Jennifer Chu (news.mit.edu), learn.aisingapore.org
MIT researchers have recently made significant strides in artificial intelligence, focusing on enhancing robotics, code generation, and system optimization. One project involves a novel robotic system designed to efficiently identify and prioritize objects relevant to assisting humans. By cutting through data noise, the robot can focus on crucial features in a scene, making it ideal for collaborative environments like smart manufacturing and warehouses. This innovative approach could lead to more intuitive and safer robotic helpers in various settings.
Researchers have also developed a new method to improve the accuracy of AI-generated code in any programming language. This approach guides large language models (LLMs) to produce error-free code that adheres to the rules of the specific language being used. By steering LLMs toward the outputs most likely to be valid and accurate, and discarding unpromising outputs early on, the system achieves greater computational efficiency. This advancement could help non-experts control AI-generated content and enhance tools for AI-powered data analysis and scientific discovery.

A new methodology for optimizing complex coordinated systems has also emerged from MIT, using simple diagrams to refine software optimization in deep-learning models. This diagram-based "language," rooted in category theory, simplifies the design of computer algorithms that control a system's many components. By revealing relationships between algorithms and parallelized GPU hardware, the approach makes it easier to optimize resource usage and manage the intricate interactions between different parts of a system, potentially revolutionizing the way complex systems are designed and controlled.
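The "discard unpromising outputs early" idea can be sketched in a few lines (a toy illustration of the general principle, not the MIT system itself): candidate programs that fail a cheap validity check are pruned before any further, more expensive evaluation.

```python
import ast

def prune_invalid(candidates):
    """Keep only candidates that parse as valid Python, discarding
    unpromising outputs early (toy illustration of the idea)."""
    valid = []
    for src in candidates:
        try:
            ast.parse(src)       # cheap syntactic validity check
            valid.append(src)
        except SyntaxError:
            pass                 # pruned before any costlier evaluation
    return valid

candidates = ["x = 1 + 2", "def f(: pass", "print('ok')"]
print(prune_invalid(candidates))  # keeps the two syntactically valid candidates
```

A real system would apply such checks incrementally during generation, so that invalid partial programs are abandoned before they are completed.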
References: PCMag Middle East ai, techxplore.com
Microsoft is making strides in artificial intelligence with the introduction of a new AI model designed to run efficiently on regular CPUs, rather than the more power-hungry GPUs traditionally required. Developed by computer scientists at Microsoft Research, in collaboration with the University of Chinese Academy of Sciences, this innovative model utilizes a 1-bit architecture, processing data using only three values: -1, 0, and 1. This allows for simplified computations that rely on addition and subtraction, significantly reducing memory usage and energy consumption compared to models that use floating-point numbers. Testing has shown that this CPU-based model can compete with and even outperform some GPU-based models in its class, marking a significant step towards more sustainable AI.
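The arithmetic simplification can be illustrated with a toy dot product (a sketch of the general idea, not Microsoft's implementation): with weights restricted to -1, 0, and 1, every "multiplication" reduces to an addition, a subtraction, or a skip.

```python
def ternary_dot(weights, activations):
    """Dot product with weights in {-1, 0, 1}: no multiplications needed."""
    total = 0
    for w, x in zip(weights, activations):
        if w == 1:
            total += x      # weight +1: add the activation
        elif w == -1:
            total -= x      # weight -1: subtract the activation
        # weight 0: skip the term entirely
    return total

print(ternary_dot([1, 0, -1, 1], [2.0, 5.0, 3.0, 4.0]))  # 2 - 3 + 4 = 3.0
```

Avoiding floating-point multiplies in the inner loop is what lets such models run efficiently on ordinary CPUs with reduced energy use.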
Alongside advancements in AI model efficiency, Microsoft is also enhancing user accessibility across its platforms. The company's Dynamics 365 Field Service is receiving a new Exchange Integration feature, designed to seamlessly synchronize work order bookings with Outlook and Teams calendars. This feature allows technicians to view their work assignments, personal appointments, and other work meetings in one centralized location. With a one-way sync from Dynamics 365 to Exchange that takes a maximum of 15 minutes, technicians can operate within Outlook, reducing scheduling confusion and creating a more streamlined workflow.

However, the rapid expansion of AI also raises concerns about energy consumption and resource management. OpenAI CEO Sam Altman has revealed that user politeness, specifically the use of "please" and "thank you" when interacting with ChatGPT, is costing the company millions of dollars in electricity. This highlights the immense energy requirements of AI chatbots, which consume significantly more power than traditional Google searches. These insights underscore the importance of developing energy-efficient AI solutions, as well as considering the broader environmental impact of increasingly complex AI systems.
References: Megan Crouse (techrepublic.com)
Microsoft has unveiled BitNet b1.58, a groundbreaking language model designed for ultra-efficient operation. Unlike traditional language models that rely on 16- or 32-bit floating-point numbers, BitNet utilizes a mere 1.58 bits per weight. This innovative approach significantly reduces memory requirements and energy consumption, enabling the deployment of powerful AI on devices with limited resources. The model is based on the standard transformer architecture, but incorporates modifications aimed at efficiency, such as BitLinear layers and 8-bit activation functions.
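A BitLinear-style forward pass can be sketched as follows. This is a minimal NumPy illustration of the technique, assuming absmean ternarization of weights and absmax 8-bit quantization of activations; it is not Microsoft's released code, and the exact scaling details may differ from BitNet b1.58.

```python
import numpy as np

def quantize_weights(W):
    """Absmean ternarization: scale by mean |W|, round, clip to {-1, 0, 1}."""
    gamma = np.mean(np.abs(W)) + 1e-8
    return np.clip(np.round(W / gamma), -1, 1), gamma

def quantize_activations(x, bits=8):
    """Absmax quantization of activations to the signed 8-bit range."""
    qmax = 2 ** (bits - 1) - 1                 # 127 for 8 bits
    scale = np.max(np.abs(x)) / qmax + 1e-8
    return np.round(x / scale), scale

def bitlinear(x, W):
    """BitLinear-style layer: the integer matmul needs only additions and
    subtractions; the float scales are reapplied once at the end."""
    Wq, w_scale = quantize_weights(W)
    xq, x_scale = quantize_activations(x)
    return (xq @ Wq.T) * x_scale * w_scale     # approximates x @ W.T

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))                    # toy weight matrix
x = rng.normal(size=(8,))                      # toy activation vector
print(bitlinear(x, W))
```

Because each weight takes one of only three values, storage drops to roughly log2(3) ≈ 1.58 bits per weight, which is where the model's name comes from.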
The BitNet b1.58 2B4T model contains two billion parameters and was trained on a massive dataset of four trillion tokens, roughly equivalent to the contents of 33 million books. Despite its reduced precision, BitNet reportedly performs comparably to models two to three times its size: in benchmark tests, it outperformed other compact models and was competitive with significantly larger, less efficient systems. Its memory footprint is just 400MB, making it suitable for deployment on laptops or in cloud environments.

Microsoft has released dedicated inference tools for both GPU and CPU execution, including a lightweight C++ version, to facilitate adoption; the model is available on Hugging Face, and the company has demonstrated it running on an Apple M2 chip. Future development plans include expanding the model to support longer texts, additional languages, and multimodal inputs such as images. Microsoft is also working on another efficient model family, the Phi series.
References: betakit.com
Shopify CEO Tobi Lütke has mandated that employees must now justify why AI cannot perform a task before requesting additional resources or new hires. This directive, outlined in an internal memo, signifies a pivotal shift toward integrating AI into every facet of the company’s operations. Lütke emphasizes that using AI effectively is now a "fundamental expectation" for all Shopify employees, viewing it as an essential skill that will only grow in importance. The memo encourages a "reflexive AI usage strategy," urging employees to explore how AI can enhance their workflows and boost productivity.
The new policy includes specific requirements for new projects, particularly during the prototyping phase, where Shopify expects AI exploration to "dominate" the process. The company will also incorporate AI usage questions into performance and peer review questionnaires, and teams must demonstrate why they cannot accomplish their goals using AI before seeking more headcount or resources. Lütke believes that AI has the potential to increase productivity exponentially, potentially achieving "100X the work done."

Lütke's memo serves as an inspiration for other workplaces considering AI integration. He highlights AI's transformational qualities as a productivity multiplier and encourages employees to "tinker" with it to discover its full potential. He also noted that some employees had already shown a "10X" improvement over what was previously thought possible, and the hope is that AI tools can multiply that gain further. The changes mark a bold step toward an AI-native future for Shopify and a potential blueprint for businesses worldwide.