Can there be an enhancement to use a locally trained LLM within enterprise boundaries instead of OpenAI ... specifically to strengthen the security / "data security" aspects of data schemas?
That's a really pertinent concern you raise, Amit - I wouldn't be surprised if a lot of enterprises would prefer a language model hosted on-prem over using OpenAI.
I'm going to stick with OpenAI for the early stages of development because it lets me prototype quickly, but I'd love to add support for on-prem open source models like Llama as I develop further.
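To make that swap straightforward later, one option is to put a thin provider abstraction between the tool and whichever model backs it. The sketch below is purely illustrative and not the project's actual code: the `LLMProvider` interface, the `complete` method, the model names, and the use of llama-cpp-python for the local backend are all assumptions for the sake of the example.

```python
# Illustrative only: a minimal provider abstraction so the OpenAI backend
# could later be swapped for an on-prem model without touching callers.
from abc import ABC, abstractmethod


class LLMProvider(ABC):
    """Common interface the rest of the tool would code against."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...


class OpenAIProvider(LLMProvider):
    """Hosted backend using the official openai SDK (v1+)."""

    def __init__(self, model: str = "gpt-4"):
        from openai import OpenAI  # reads OPENAI_API_KEY from the environment
        self.client = OpenAI()
        self.model = model

    def complete(self, prompt: str) -> str:
        resp = self.client.chat.completions.create(
            model=self.model,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content


class LocalLlamaProvider(LLMProvider):
    """On-prem backend via llama-cpp-python; schema text never leaves the network."""

    def __init__(self, model_path: str = "./models/llama-2-13b.Q4_K_M.gguf"):
        from llama_cpp import Llama
        self.llm = Llama(model_path=model_path)  # hypothetical local model path

    def complete(self, prompt: str) -> str:
        out = self.llm(prompt, max_tokens=512)
        return out["choices"][0]["text"]
```

With something like this in place, switching an enterprise deployment from OpenAI to a local Llama model would just be a matter of wiring in a different provider at startup.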
Would love to have such a framework and to recommend it to folks with security concerns about the OpenAI world. Will be looking forward to it ... post-prototype days....