App-Related News
- November 14, 2023
- Technical Information
MongoDB Takes Control of Laravel Integration: Official Support Announced
MongoDB is now officially in charge of the Laravel framework’s community-driven MongoDB integration. This signals a commitment to regular updates: enhanced functionality, bug fixes, and compatibility with the latest releases of both Laravel and MongoDB.

Formerly known as jenssegers/laravel-mongodb, this library extends Eloquent, Laravel’s ORM, giving PHP developers working with MongoDB a seamless experience through Eloquent models, query builders, and transactions.
If you’re eager to integrate MongoDB with the Laravel framework, explore the library’s most recent release, Laravel MongoDB 4.0.0, which introduces support for Laravel 10. And if you’re embarking on MongoDB PHP projects, a tutorial on building a Laravel + MongoDB back-end service and comprehensive library documentation are readily available here.
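As a rough sketch of what that Eloquent experience looks like, here is a minimal model backed by MongoDB, assuming the library’s 4.x namespace and a `mongodb` connection configured in `config/database.php`; the `Book` class and its fields are hypothetical:

```php
<?php

// Minimal sketch (not from the article): an Eloquent model stored in
// MongoDB via the Laravel MongoDB library. Assumes the 4.x namespace
// and a 'mongodb' connection defined in config/database.php.
use MongoDB\Laravel\Eloquent\Model;

class Book extends Model
{
    protected $connection = 'mongodb';   // use the MongoDB connection
    protected $collection = 'books';     // hypothetical collection name
    protected $fillable = ['title', 'author', 'year'];
}

// The familiar Eloquent query builder runs against the MongoDB collection.
$recent = Book::where('year', '>=', 2020)
    ->orderBy('title')
    ->get();
```

Transactions and raw queries follow the same Eloquent-style conventions; see the library documentation for the details.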
By Asahi
waithaw at November 14, 2023 10:00:00
- November 13, 2023
- AI
Samsung Gauss
Samsung has introduced its generative AI model, Samsung Gauss, at the Samsung AI Forum 2023. The model comprises three tools: Samsung Gauss Language, Samsung Gauss Code, and Samsung Gauss Image, which together aim to enhance productivity across various applications. Samsung Gauss Language is a large language model similar to ChatGPT, capable of understanding and responding to human language; it can assist with tasks such as email writing, document summarization, and language translation. Samsung plans to integrate this language model into devices such as phones and laptops. Whether the model will be available for interaction in English as well as Korean remains undisclosed.
Samsung Gauss Code, designed to work with the coding assistant code.i, focuses specifically on development code. It aims to help developers write code quickly, supporting code description and test-case generation through an interactive interface. Samsung Gauss Image, as the name implies, focuses on image generation and editing, and can potentially convert low-resolution images into high-resolution ones. While currently limited to internal use, Samsung plans to release Gauss to the public “in the near future,” with a potential application in the Galaxy S24, expected as early as 2024.

Additionally, Samsung has established an AI Red Team to monitor security and privacy concerns throughout the AI development process, ensuring adherence to ethical principles. The company had previously imposed a temporary ban on generative AI tools, including ChatGPT and Google’s Bard, on its devices following an internal data leak.
At the AI Forum, Daehyun Kim, executive vice president of the Samsung Research Global AI Center, highlighted Samsung’s commitment to collaborating on generative AI research with industry and academia. The name “Samsung Gauss” pays homage to the mathematician Carl Friedrich Gauss, whose normal distribution theory is considered fundamental to AI and machine learning. You can check out Samsung’s detailed event article here.
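For background on the namesake (a standard formula, not part of Samsung’s announcement), the density of the normal, or Gaussian, distribution is:

```latex
% Density of the normal (Gaussian) distribution N(\mu, \sigma^2),
% the curve named after Carl Friedrich Gauss.
f(x) = \frac{1}{\sigma\sqrt{2\pi}}
       \exp\!\left( -\frac{(x - \mu)^2}{2\sigma^2} \right)
```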
Yuuma
yuuma at November 13, 2023 10:00:00
- November 1, 2023
- Android
I Tried Bliss OS for PC
tanaka at November 1, 2023 10:00:00
- October 31, 2023
- AI
Assessing the Impact of Generative AI: Risks, Ethical Frontiers, and Societal Integration
Generative AI systems, which generate content in various formats, are increasingly prevalent across multiple fields like medicine, news, politics, and even providing companionship in social interactions. Initially, these systems primarily produced information in a single format, such as text or graphics, but there is now a notable trend towards enhancing their adaptability to work with additional formats like audio (including voice and music) and video.
The rising usage of generative AI systems underscores the critical need to evaluate the potential risks associated with their deployment. As these technologies become more widespread and integrated into diverse applications, concerns about public safety are mounting, making risk assessment a top priority for AI developers, policymakers, regulators, and civil society. In particular, the prospect of AI that could propagate misinformation raises ethical questions about its societal impact.
In response to these concerns, DeepMind, Google’s AI research lab, has published a paper proposing a framework for assessing the societal and ethical risks associated with AI systems. DeepMind’s proposal emphasizes the necessity for engagement from various stakeholders, including AI developers, app developers, and the general public, in evaluating and auditing AI systems. The research lab underscores the significance of examining AI systems at the “point of human interaction” and understanding their integration into society.
You can check out the paper here.
Asahi
waithaw at October 31, 2023 10:00:00
- October 30, 2023
- AI
OpenAI establishes a team to study “catastrophic” AI risks, including nuclear threats
OpenAI has recently established a new team called “Preparedness” to address and assess potential catastrophic risks associated with AI models. The initiative is led by Aleksander Madry, director of MIT’s Center for Deployable Machine Learning, who joined OpenAI as head of Preparedness. The team’s primary responsibilities encompass monitoring, forecasting, and safeguarding against risks posed by future AI systems, ranging from their ability to deceive and manipulate humans (as seen in phishing attacks) to their potential for generating malicious code.

Preparedness is tasked with studying a range of risk categories, some of which may appear far-fetched, such as “chemical, biological, radiological, and nuclear” threats in the context of AI models. OpenAI CEO Sam Altman, known for expressing concerns about AI-related doomsday scenarios, is taking a proactive approach in preparing for such risks. The company is open to investigating both obvious and less apparent AI risks and is soliciting ideas from the community for risk studies, offering a $25,000 prize and job opportunities with the Preparedness team to top contributors.
In addition to risk assessment, the Preparedness team will work on formulating a “risk-informed development policy” to guide OpenAI’s approach to AI model evaluations, monitoring, risk mitigation, and governance structure. This approach complements OpenAI’s existing work in AI safety, focusing on both the pre- and post-model deployment phases. OpenAI acknowledges the potential benefits of highly capable AI systems but emphasizes the need to understand and establish infrastructure to ensure their safe use and operation. This announcement coincides with a major U.K. government summit on AI safety and follows OpenAI’s commitment to study and control emerging forms of “superintelligent” AI, driven by concerns about the potential for advanced AI systems to surpass human intelligence within the next decade.
You can read more details on the OpenAI blog here.
Yuuma
yuuma at October 30, 2023 10:00:00