Sora from OpenAI

OpenAI has introduced Sora, a generative AI model designed to turn text into video. According to OpenAI, Sora can produce 1080p, movie-like scenes with multiple characters, different types of motion, and detailed backgrounds, working from short or detailed text prompts or from still images. Sora can also “extend” existing video clips by filling in missing details.

The model’s strength lies in its deep understanding of language, which lets it interpret prompts accurately and generate expressive characters that convey vivid emotion. OpenAI stresses that Sora understands not only what the user asks for, but also how those things exist and behave in the physical world.

Video Credit: OpenAI

OpenAI’s claims are lofty, but the sample videos it has shared do look impressive compared with other text-to-video systems. Sora can produce videos up to a minute long in a range of styles, such as photorealistic, animated, or black and white, while keeping scenes coherent and avoiding many of the artifacts common in AI-generated video.

Still, Sora is not without flaws, as OpenAI acknowledges. The model may struggle to accurately simulate the physics of complex scenes, understand cause-and-effect relationships, or maintain spatial and temporal consistency. OpenAI positions Sora as a research preview and is not making it generally available yet, citing concerns about potential misuse.

OpenAI is actively collaborating with experts to identify and address potential vulnerabilities in the model and is developing tools to detect videos generated by Sora. Should OpenAI decide to make Sora publicly accessible, it pledges to include provenance metadata in generated outputs to mitigate misuse risks.

Yuuma


