{"id":30204,"date":"2024-04-23T20:10:34","date_gmt":"2024-04-23T12:10:34","guid":{"rendered":"https:\/\/linguaresources.com\/?p=30204"},"modified":"2024-07-18T21:36:45","modified_gmt":"2024-07-18T13:36:45","slug":"llm%e5%9f%b9%e8%ae%ad%e6%95%b0%e6%8d%ae%e4%b8%ba%e4%bd%95%e9%87%8d%e8%a6%81%ef%bc%9a%e4%bc%81%e4%b8%9a%e5%86%b3%e7%ad%96%e8%80%85%e5%bf%ab%e9%80%9f%e6%8c%87%e5%8d%97","status":"publish","type":"post","link":"https:\/\/linguaresources.com\/?p=30204","title":{"rendered":"Meet Palmyra-Med, a powerful LLM designed for healthcare"},"content":{"rendered":"

Palmyra-Med is a model built by Writer specifically to meet the needs of the healthcare industry. Today we're thrilled to share that Palmyra-Med earned top marks on PubMedQA, the leading benchmark for biomedical question answering, with an accuracy score of 81.1%, outperforming the GPT-4 base model and a medically trained human test taker.


Overcoming the limitations of base models

Base models have a breadth of knowledge, but when asked a domain-specific question, they tend to produce generalized answers rather than following user instructions and delivering meaningful, relevant responses. This is especially challenging when AI is applied to industries like healthcare, where accuracy and precision are a matter of life and death. To meet the needs of this industry safely and responsibly, we need models that are trained on medical knowledge and equipped to handle complex, healthcare-specific tasks.

Palmyra-Med takes top marks

To create Palmyra-Med, we took our base model, Palmyra-40b, and applied a method called instruction fine-tuning. Through this process, we trained the LLM on curated medical datasets from two publicly available sources, PubMedQA and MedQA. The former includes a database of PubMed abstracts and a set of multiple-choice questions; the latter consists of a large collection of text on clinical medicine and a set of US Medical Licensing Exam-style questions obtained from the National Medical Board Examination. Through the training process, we also enhanced the model's ability to follow instructions and provide contextual responses instead of generalized answers.
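The post doesn't include training code, but the recipe it describes, instruction fine-tuning a causal LM on question-answer pairs, can be sketched roughly as below. This is a minimal illustration, not Writer's actual pipeline: the base checkpoint, prompt template, hyperparameters, and the Hugging Face dataset identifier and field names are all assumptions based on the public PubMedQA release.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Placeholder base checkpoint; swap in whichever causal LM you are fine-tuning.
MODEL_NAME = "gpt2"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

# Assumed dataset id/config and field names for the labeled PubMedQA split.
dataset = load_dataset("pubmed_qa", "pqa_labeled", split="train")

def to_instruction(example):
    # Fold each record into a single instruction-following training string.
    context = " ".join(example["context"]["contexts"])
    text = (
        "Answer the biomedical question with yes, no, or maybe.\n"
        f"Context: {context}\n"
        f"Question: {example['question']}\n"
        f"Answer: {example['final_decision']}"
    )
    return tokenizer(text, truncation=True, max_length=1024)

tokenized = dataset.map(to_instruction, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="med-instruct-sketch",
        per_device_train_batch_size=2,
        num_train_epochs=3,
        learning_rate=2e-5,
    ),
    train_dataset=tokenized,
    # Causal-LM objective: labels are the input tokens themselves (mlm=False).
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```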

The results speak for themselves. On PubMedQA, Palmyra-Med attained an accuracy of 81.1%, beating a medically trained human test taker's score of 78.0%.
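For context on how a score like 81.1% is produced: in PubMedQA's reasoning-required setting the model answers each question with yes, no, or maybe, and accuracy is the fraction of questions whose predicted label matches the gold label. A minimal, library-free scoring sketch (the `predict_fn` stand-in and the toy examples are illustrative, not the actual evaluation harness):

```python
from typing import Callable, Iterable, Tuple

Label = str  # "yes", "no", or "maybe"

def pubmedqa_accuracy(
    examples: Iterable[Tuple[str, str, Label]],
    predict_fn: Callable[[str, str], Label],
) -> float:
    """Fraction of (question, context, gold_label) items answered correctly."""
    correct = total = 0
    for question, context, gold in examples:
        pred = predict_fn(question, context).strip().lower()
        correct += int(pred == gold)
        total += 1
    return correct / total if total else 0.0

# Toy usage with a dummy predictor that always answers "yes".
toy_examples = [
    ("Does drug X reduce mortality?", "Trial abstract ...", "yes"),
    ("Is biomarker Y prognostic?", "Cohort abstract ...", "no"),
]
print(pubmedqa_accuracy(toy_examples, lambda q, c: "yes"))  # 0.5
```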

Palmyra LLMs are designed for the enterprise

From the beginning, our commitment has been to build generative AI technology that's usable by customers from day one, and every decision we've made along the way has supported that mission. At 40 billion parameters, Palmyra-Med is a fraction of the size of PaLM 2, which is reported to have 340 billion parameters, and GPT-4, which is reported to have 1.76 trillion parameters.
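To make the size gap concrete, here is a back-of-the-envelope calculation of weights-only memory at 16-bit precision (2 bytes per parameter, ignoring activations, KV cache, and serving overhead) for the parameter counts cited above:

```python
# Weights-only memory at fp16/bf16: parameters x 2 bytes.
# Real deployments need extra room for activations, KV cache, and batching.
PARAM_COUNTS = {
    "Palmyra-Med (40B)": 40e9,
    "PaLM 2 (reported 340B)": 340e9,
    "GPT-4 (reported 1.76T)": 1.76e12,
}
BYTES_PER_PARAM = 2

for name, params in PARAM_COUNTS.items():
    gib = params * BYTES_PER_PARAM / 2**30
    print(f"{name}: ~{gib:,.0f} GiB of weights alone")
# -> ~75 GiB, ~633 GiB, and ~3,278 GiB respectively.
```

At roughly 75 GiB of weights, a 40-billion-parameter model fits on a handful of commodity accelerators; the reported sizes of PaLM 2 and GPT-4 do not.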

Supporting these massive models isn't economically viable for most enterprises, given hardware and architecture limitations. In reaction to these scaling challenges, and in an attempt to make the models commercially viable, some providers have pursued distillation, which has degraded model performance. In a research paper titled "How is ChatGPT's Behavior Changing over Time?", authors from Stanford University and UC Berkeley show that GPT-4's and GPT-3.5's ability to answer the same questions changed drastically within a three-month period. Given the black-box nature of these models, it isn't possible to reliably predict how they'll behave over time.

In contrast, Palmyra LLMs are efficient in size and powerful in capabilities, making them the scalable solution for enterprises. Despite being a fraction of the size of larger LLMs, Palmyra has a proven track record of delivering superior results, including top scores on Stanford HELM. And because we invested in our own models, we offer full transparency, giving you the ability to inspect our model code, data, and weights.

We're deeply focused on building a secure, enterprise-grade AI platform. Writer will never store, share, or use your data in model training, and we're compliant with SOC 2 Type II, GDPR, HIPAA, and PCI. Our LLMs can be deployed as a fully managed platform or self-hosted, and unlike other services with waitlists and limited access, they're generally available to all customers.

Paving the way for generative AI in healthcare

We're excited for Palmyra-Med to empower healthcare professionals to accelerate growth and increase productivity. In the last few months, we've already been in discussions with customers on creative and innovative use cases that leverage generative AI in the healthcare industry: