Linear programming (LP) has been a cornerstone of mathematical programming since the late 1940s, with Dantzig's simplex method and interior-point methods the most prevalent solution techniques. Today's advanced commercial LP solvers rely on these methods but face challenges in scaling to very large problems.
Source: https://blog.research.google/
AIGoogle | Rating: 62 | 2024-09-20 08:30:39 PM |
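As a minimal illustration of what an LP solver computes, the sketch below solves a tiny two-variable LP by enumerating the vertices of the feasible region; the optimum of an LP always lies at such a vertex, which is the geometric fact the simplex method exploits. The problem data are made up for the example and have nothing to do with the solvers discussed above.

```python
from itertools import combinations

# Maximize 2x + 3y subject to the constraints below, each written
# uniformly as a*x + b*y <= c (including the nonnegativity bounds).
constraints = [
    (1, 1, 4),    # x + y  <= 4
    (1, 3, 6),    # x + 3y <= 6
    (-1, 0, 0),   # -x <= 0, i.e. x >= 0
    (0, -1, 0),   # -y <= 0, i.e. y >= 0
]

def intersect(c1, c2):
    """Solve the 2x2 system where both constraints hold with equality."""
    a1, b1, r1 = c1
    a2, b2, r2 = c2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None  # parallel lines, no unique intersection
    x = (r1 * b2 - r2 * b1) / det
    y = (a1 * r2 - a2 * r1) / det
    return (x, y)

def feasible(pt):
    return all(a * pt[0] + b * pt[1] <= c + 1e-9 for a, b, c in constraints)

# Candidate vertices are pairwise constraint intersections that
# satisfy every constraint; the LP optimum is one of them.
vertices = [p for c1, c2 in combinations(constraints, 2)
            if (p := intersect(c1, c2)) is not None and feasible(p)]

best = max(vertices, key=lambda p: 2 * p[0] + 3 * p[1])
print(best, 2 * best[0] + 3 * best[1])  # (3.0, 1.0) with objective 9.0
```

Enumerating vertices is exponential in general, which is exactly why practical solvers use simplex pivoting or interior-point iterations instead; this brute-force version only conveys the geometry.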
By 2050, the world's urban population is expected to grow by 2.5 billion, with 90% of that growth occurring in cities across Asia and Africa. To help plan for this growth, the Open Buildings dataset was launched in 2021, mapping buildings in Africa, and later expanded to Latin America, the Caribbean, and South and Southeast Asia.
Source: https://blog.research.google/
AIGoogle | Rating: 42 | 2024-09-19 01:10:47 PM |
Researchers are developing automatic animal species identification tools to protect animals in remote environments. These tools use large datasets of recorded soundscapes to classify vocalizations from various species. While models can classify thousands of bird vocalizations, identifying vocalizations from multiple whale species has proven challenging because whales' acoustic range is so broad, spanning roughly 10 Hz to 120 kHz.
Source: https://blog.research.google/
AIGoogle | Rating: 57 | 2024-09-18 03:30:33 PM |
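A quick calculation shows why that 10 Hz–120 kHz span is hard to handle with a single model or recording setup: by the Nyquist criterion, capturing a signal up to f_max without aliasing requires sampling at least 2·f_max, and the span covers far more octaves than typical birdsong. The numbers below follow directly from the figures in the summary; everything else is standard signal-processing arithmetic.

```python
import math

def min_sampling_rate_hz(f_max_hz):
    """Nyquist criterion: sample at >= twice the highest frequency."""
    return 2 * f_max_hz

# The whale range cited above reaches 120 kHz, far beyond the
# 44.1 kHz rate of ordinary audio recordings.
print(min_sampling_rate_hz(120_000))  # 240000 Hz

# The span from 10 Hz to 120 kHz covers about 13.6 octaves,
# versus roughly 20 Hz - 20 kHz (about 10 octaves) for human hearing.
octaves = math.log2(120_000 / 10)
print(round(octaves, 1))  # 13.6
```

This is why multi-species whale classifiers face a harder front-end problem than bird classifiers: no single standard recording configuration covers the whole band.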
A semi-automatic pipeline is proposed to generate candidate multiple-choice questions for long-video datasets. Building such datasets otherwise requires significant manual effort to select, watch, understand, and annotate videos. The resulting task asks challenging questions about long videos, which may require listening to the audio track and rewinding to rewatch key parts.
Source: https://blog.research.google/
AIGoogle | Rating: 58 | 2024-09-16 10:30:33 PM |
Large Language Models (LLMs) struggle to ground their responses in verifiable facts, leading to hallucinations. To address this, researchers have developed DataGemma, an experimental set of open models that help integrate real-world knowledge from various sources. This aims to build responsible and trustworthy AI systems.
Source: https://blog.research.google/
AIGoogle | Rating: 61 | 2024-09-12 03:50:37 PM |
Our method for mitigating hallucinations consists of two key steps: generative data augmentation and contrastive tuning. The process alters ground-truth objects and object-related attributes to introduce hallucinated concepts, then computes a contrastive loss between factual and hallucinated tokens. The goal is to minimize the likelihood of generating hallucinated tokens and maximize the likelihood of generating factual tokens.
Source: https://blog.research.google/
AIGoogle | Rating: 61 | 2024-09-04 10:47:54 PM |
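The two steps above can be sketched in miniature. Everything here is illustrative, not the post's actual method or API: `augment` stands in for generative data augmentation (swapping a ground-truth object word for a hallucinated one), and `contrastive_loss` is a generic margin-style contrast that rewards factual tokens over hallucinated ones, which matches the stated goal without claiming to be the paper's exact loss.

```python
def augment(caption: str, swaps: dict) -> str:
    """Toy data augmentation: swap ground-truth object words for
    hallucinated ones (hypothetical helper, not the post's pipeline)."""
    return " ".join(swaps.get(word, word) for word in caption.split())

def contrastive_loss(factual_logps, hallucinated_logps, margin=1.0):
    """Penalize the model unless factual tokens out-score hallucinated
    tokens by at least `margin` in average log-probability. Driving
    this to zero raises factual likelihood and lowers hallucinated
    likelihood, as described above."""
    f = sum(factual_logps) / len(factual_logps)
    h = sum(hallucinated_logps) / len(hallucinated_logps)
    return max(0.0, margin - (f - h))

hallucinated_caption = augment("a dog on a sofa", {"dog": "cat"})
print(hallucinated_caption)              # "a cat on a sofa"
print(contrastive_loss([-0.2], [-3.0]))  # 0.0: factual wins by more than the margin
```

In a real training loop the log-probabilities would come from the model's output distribution over the factual and augmented captions, and the loss would be backpropagated alongside the usual language-modeling objective.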
Google AI Edge's MediaPipe launched an experimental cross-platform large language model inference API, allowing developers to run models fully on-device, eliminating server costs, ensuring user privacy, and enabling offline usage. These models have billions of parameters and sizes measured in gigabytes, which can overwhelm on-device memory and compute.
Source: https://blog.research.google/
AIGoogle | Rating: 62 | 2024-08-22 08:37:52 PM |
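A back-of-the-envelope estimate shows why gigabyte-scale models strain devices: model size is roughly parameter count times bytes per weight. The numbers below are illustrative, not MediaPipe specifics.

```python
def model_size_gb(num_params: int, bytes_per_param: float) -> float:
    """Approximate in-memory weight footprint in GiB."""
    return num_params * bytes_per_param / 1024**3

# A hypothetical 7B-parameter model:
fp16_gb = model_size_gb(7_000_000_000, 2)    # 16-bit weights
int4_gb = model_size_gb(7_000_000_000, 0.5)  # 4-bit quantized weights
print(round(fp16_gb, 1))  # 13.0 GiB, more RAM than most phones have
print(round(int4_gb, 1))  # 3.3 GiB, plausible on a high-end device
```

This is why quantization is central to on-device inference: cutting bytes per weight is the most direct way to fit a multi-billion-parameter model into mobile memory budgets, before even accounting for activations and the KV cache.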