Palmyra-Med is a model built by Writer specifically to meet the needs of the healthcare industry. Today we're thrilled to share that Palmyra-Med earned top marks on PubMedQA, the leading benchmark for biomedical question answering, with an accuracy score of 81.1%, outperforming the GPT-4 base model and a medically trained human test-taker.

## Overcoming the limitations of base models

Base models have a breadth of knowledge, but when tasked with a domain-specific question, they tend to generalize rather than follow user instructions and produce meaningful, relevant responses. This is especially challenging when AI is applied to industries like healthcare, where accuracy and precision are a matter of life and death. To meet the needs of this industry safely and responsibly, we need models that are trained on medical knowledge and equipped to handle complex, healthcare-specific tasks.

To create Palmyra-Med, we took our base model, Palmyra-40b, and applied a method called instruction fine-tuning. Through this process, we trained the LLM on curated medical datasets from two publicly available sources: PubMedQA and MedQA. The former includes a database of PubMed abstracts and a set of multiple-choice questions; the latter consists of a large collection of text on clinical medicine and a set of US Medical Licensing Exam-style questions obtained from the National Medical Board Examination. Through the training process, we also enhanced the model's ability to follow instructions and provide contextual responses instead of generalized answers.
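To make the fine-tuning step concrete, here is a minimal sketch of instruction fine-tuning a causal language model on the publicly available PubMedQA `pqa_labeled` set with the Hugging Face `transformers` Trainer. The base-model identifier, prompt template, and hyperparameters are illustrative assumptions, not Writer's actual training setup.

```python
# Minimal sketch of instruction fine-tuning on a medical QA dataset.
# Model id, prompt template, and hyperparameters are placeholders, not the production recipe.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

BASE_MODEL = "writer/palmyra-base"  # placeholder id; substitute your own base checkpoint

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# pqa_labeled pairs each PubMed abstract with a question and a yes/no/maybe answer.
dataset = load_dataset("pubmed_qa", "pqa_labeled", split="train")

def to_instruction(example):
    # Turn each QA record into a single instruction-following training text.
    context = " ".join(example["context"]["contexts"])
    prompt = (
        "### Instruction:\nAnswer the biomedical question using the context.\n\n"
        f"### Context:\n{context}\n\n### Question:\n{example['question']}\n\n"
        f"### Response:\n{example['final_decision']}"
    )
    return tokenizer(prompt, truncation=True, max_length=1024)

tokenized = dataset.map(to_instruction, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="palmyra-med-sft",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        num_train_epochs=3,
        learning_rate=2e-5,
        bf16=True,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The same recipe extends to MedQA by formatting its exam-style multiple-choice questions into the same instruction/response template before mixing the two datasets.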
## Palmyra-Med takes top marks

The results speak for themselves: on PubMedQA, Palmyra-Med attained an accuracy of 81.1%, beating a medically trained human test-taker's score of 78.0%.
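For reference, PubMedQA scores a model by exact match of its yes/no/maybe answers against expert labels. The sketch below shows one way to compute that accuracy; `ask_model` is a hypothetical stand-in for whatever inference endpoint is under test, and the official benchmark defines its own reasoning-required test split rather than the simple loop shown here.

```python
# Minimal sketch of scoring yes/no/maybe answers against PubMedQA's expert labels.
# ask_model() is a hypothetical placeholder for the inference endpoint under test.
from datasets import load_dataset

VALID_ANSWERS = {"yes", "no", "maybe"}

def ask_model(question: str, context: str) -> str:
    """Placeholder: query the model being evaluated and return 'yes', 'no', or 'maybe'."""
    raise NotImplementedError("wire this up to your own inference endpoint")

def pubmedqa_accuracy(limit: int = 100) -> float:
    # pqa_labeled holds 1,000 expert-annotated examples; the official benchmark
    # carves a dedicated test split out of these rather than scoring them all at once.
    data = load_dataset("pubmed_qa", "pqa_labeled", split="train").select(range(limit))
    correct = 0
    for ex in data:
        context = " ".join(ex["context"]["contexts"])
        prediction = ask_model(ex["question"], context).strip().lower()
        if prediction not in VALID_ANSWERS:
            prediction = "maybe"  # treat malformed output as a miss-prone guess
        correct += int(prediction == ex["final_decision"])
    return correct / len(data)    # 0.811 would correspond to the 81.1% reported above

if __name__ == "__main__":
    print(f"PubMedQA accuracy: {pubmedqa_accuracy():.1%}")
```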
## Palmyra LLMs are designed for the enterprise

From the beginning, our commitment has been to build generative AI technology that's usable by customers from day one, and every decision we've made along the way has supported that mission. At 40 billion parameters, Palmyra-Med is orders of magnitude smaller than PaLM 2, which is reported to have 340 billion parameters, and GPT-4, which is reported to have 1.76 trillion parameters.

Supporting these massive models isn't economically viable for most enterprises, given hardware and architecture limitations. In reaction to these scaling challenges, and in an attempt to make such models commercially viable, some providers have distilled them down, which has hurt model performance. In a research paper titled "How Is ChatGPT's Behavior Changing over Time?", authors from Stanford University and UC Berkeley show that GPT-4's and GPT-3.5's ability to answer the same questions changed drastically within a three-month period. Given the black-box nature of these models, it's not possible to reliably predict how they'll behave over time.

In contrast, Palmyra LLMs are efficient in size and powerful in capabilities, making them a scalable solution for enterprises. Despite being a fraction of the size of larger LLMs, Palmyra has a proven track record of delivering superior results, including top scores on Stanford HELM. And because we invested in our own models, we offer full transparency, giving you the ability to inspect our model code, data, and weights.

We're deeply focused on building a secure, enterprise-grade AI platform. Writer will never store, share, or use your data in model training, and we're compliant with SOC 2 Type II, GDPR, HIPAA, and PCI. Our LLMs can be deployed as a fully managed platform or self-hosted, and unlike other services with waitlists and limited access, they're generally available to all customers.
## Paving the way for generative AI in healthcare

We're excited for Palmyra-Med to empower healthcare professionals to accelerate growth and increase productivity. In the last few months, we've already been in discussions with customers about creative and innovative use cases that leverage generative AI in the healthcare industry.
If you're interested in learning more about how we built Palmyra-Med, read our white paper. Ready to experience LLMs built for healthcare needs? Schedule a demo with our sales team today.