which AI is best for reading BMS manuals?
-
It will work.
Using RAG and proper prompts, you can avoid hallucinations. You can also use distillation to get by with a smaller model (and make it work locally if dockerised: https://github.com/google-research/distilling-step-by-step).
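For anyone curious, the retrieve-then-prompt (RAG) pattern mentioned above boils down to something like this toy sketch. It is pure Python for illustration only: the word-overlap scoring stands in for real embeddings, and the chunk sizes, function names, and manual snippet are all made up, not any particular library's API.

```python
# Toy retrieve-then-prompt (RAG) sketch in pure Python.
# Real setups use embeddings and a vector store; here a simple
# word-overlap score stands in for semantic retrieval.

def chunk(text, size=40):
    """Split the manual into overlapping word chunks."""
    words = text.split()
    step = max(size // 2, 1)
    return [" ".join(words[i:i + size]) for i in range(0, len(words), step)]

def retrieve(question, chunks, k=2):
    """Rank chunks by words shared with the question (embedding stand-in)."""
    q = set(question.lower().split())
    return sorted(chunks,
                  key=lambda c: len(q & set(c.lower().split())),
                  reverse=True)[:k]

def build_prompt(question, context):
    """Ground the model in retrieved text to limit hallucinations."""
    return ("Answer ONLY from the manual excerpts below. "
            "If the answer is not there, say so.\n\n"
            + "\n---\n".join(context)
            + f"\n\nQuestion: {question}")

manual = ("The AIM-120 requires a radar lock before launch. "
          "Set the master arm switch to ARM before weapon release.")
question = "How do I set the master arm switch"
print(build_prompt(question, retrieve(question, chunk(manual, size=8))))
```

The point of the pattern is the last function: the model only ever sees retrieved excerpts plus an instruction to refuse answers outside them, which is what keeps it from inventing switchology.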
Now… I really doubt the added value. A Ctrl+F is more than enough to find the info in such well-structured documentation.
And if you think about the planet, you will avoid using an LLM when a Ctrl+F is enough (Elasticsearch if you want to be fancy).
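For the Ctrl+F camp, a bulk keyword search is trivial once the PDF has been dumped to plain text (a tool like pdftotext can produce the dump). A rough stdlib-only sketch, with hypothetical filenames and a made-up manual line:

```python
# Minimal "Ctrl+F in bulk": scan a text dump of the manual
# (e.g. produced beforehand with `pdftotext manual.pdf manual.txt`)
# and return every matching line with its line number.
import re

def find(term, text):
    """Case-insensitive literal search, one hit per matching line."""
    pattern = re.compile(re.escape(term), re.IGNORECASE)
    return [(n, line.strip())
            for n, line in enumerate(text.splitlines(), 1)
            if pattern.search(line)]

manual_text = ("Page 12\n"
               "Master arm switch: set to ARM before weapon release.\n"
               "Page 13\n")
for n, line in find("master arm", manual_text):
    print(f"{n}: {line}")
```

No GPU, no API bill, and the answer comes with a line number you can verify against the manual yourself.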
We have this debate at work on a regular basis; the hype around LLMs is really going crazy. It’s cool, but use it when it’s needed…
-
@yop217 Are you serious right now?! You just equated using an LLM to pollution like there is industrial waste byproduct… “Save the planet, use CTRL+F and not an LLM!”
No offense, but that has to be the silliest thing I have ever read regarding this technology.
-
*chuckles
-
@SemlerPDX said in which AI is best for reading BMS manuals?:
@yop217 Are you serious right now?! You just equated using an LLM to pollution like there is industrial waste byproduct… “Save the planet, use CTRL+F and not an LLM!”
No offense, but that has to be the silliest thing I have ever read regarding this technology.
Actually, there is a significant cost to using LLMs. Think of it this way: throwing one plastic bottle in the lake isn’t going to ruin the lake, but if everyone does it there is a problem. LLMs require significant computational power, which in turn requires cooling. While it’s a small thing, it can be measured, and it does/can/will have an impact like anything else we do in excessive amounts.
Not saying it’s going to totally ruin the planet, but don’t dismiss it completely.
-
@SemlerPDX https://www.linkedin.com/pulse/carbon-impact-large-language-models-ais-growing-cost-vaidheeswaran-fcbhc/. If you look just a bit, you will find tons of articles dealing with that topic.
I found an estimate of 4.32 grams per query on ChatGPT.
It was also estimated a few months ago that “a single GPT query consumes 1,567%, or 15 times more energy than a Google search query.” Frankly, that figure means little on its own; it all depends on the model, the hardware, the query itself and so on. But the idea is that it does consume more. Now ChatGPT gets around 10 million queries per day (an old estimate, probably more now). So it’s 10 million × 15 times the consumption of a basic search query… do the maths.
And 10 million × 4.32 g ≈ 43 tonnes, daily (https://piktochart.com/blog/carbon-footprint-of-chatgpt/). I don’t fully trust these numbers… but they give a rough idea of what it looks like.
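Taking the two quoted estimates at face value, the arithmetic does check out:

```python
# Back-of-the-envelope check of the figures quoted above.
GRAMS_CO2_PER_QUERY = 4.32      # rough, contested per-query estimate
QUERIES_PER_DAY = 10_000_000    # old estimate of daily ChatGPT queries

daily_grams = GRAMS_CO2_PER_QUERY * QUERIES_PER_DAY
daily_tonnes = daily_grams / 1_000_000  # 1 tonne = 1e6 g
print(f"{daily_tonnes:.1f} tonnes CO2 per day")  # → 43.2 tonnes CO2 per day
```

Both inputs are shaky, as noted above, so treat the 43-odd tonnes as an order-of-magnitude figure, not a measurement.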
Yes, using an LLM by itself is not killing the planet (training one does consume a shitload of energy, though). But it does have an impact due to the scaling effect (which is expected to keep growing).
It’s not a silly concern, as the industry is working on it extensively, among other aspects (ethical AI, overall consumption, bias, etc.). It’s far from being the #1 source of pollution, but my point is just: in this case, IMO, it’s a fun project, but I don’t see the need.
-
@tank2 said: But there are other AI tools to chat with your pdf document
Not wanting to use AI, I just tried literally chatting with my pdf document.
Opened it up and just said: “Whassup, dude…?”
As yet, there has been absolutely no response.
I will keep you posted.
-
@Aragorn All fine and dandy. But…
It will scare the sh!t out of you when one day that pdf answers you back.
-
@bbostjan It would sound something like “RTFM”.
-
@yop217 Again, absolutely no offense intended, but I’m pretty sure such things could be said for most advances in technology, from hardware to compute tasks… which would make the following statements valid along the same lines:
“And if you think about the planet, you will avoid using a 4K Monitor when a 1080p is enough (DSR if you want to be fancy).”
“And if you think about the planet, you will avoid using a voice search feature when a text input in the search bar is enough (WSR speech-to-text if you want to be fancy).”
…I’m not oblivious to the concept of increased carbon footprint with new technologies, but I feel telling people that “if they care about the planet, they will avoid a particular technology” is somewhat over-the-top as such statements oversimplify complex issues while ignoring potential benefits (and drawbacks of alternatives).
Also, your statement was originally about LLM technology, but you then shifted to the specific ChatGPT online service when explaining your point. While I get that this was likely not your intention, I had not assumed that by LLM you meant the ChatGPT service specifically, which (with regard to my original exclamation) is part of why I felt it was so silly: LLM is not synonymous with ChatGPT; it describes the technology that service uses. You can run an LLM locally, you can run a small one on a smartphone-sized device… they can be incredibly low power or massively deployed on servers like the ChatGPT service. Vilifying LLM technology as if all of it were the ChatGPT service is something to be aware of, and to be avoided.
While it takes significant work to build an LLM, including a fair amount of power consumption over time that increases the carbon footprint of generating such a model, it is a one-off cost: once trained, the model is set. And while models must be updated and added to over time, or new LLMs trained from scratch or built on the framework of predecessors, like any new software or technology, the actions of the consumer should not be so directly influenced by the minor differences in carbon footprint of using one technology over another (in my humble opinion).
-
@Aragorn said in which AI is best for reading BMS manuals?:
Not wanting to use AI, I just tried literally chatting with my pdf document.
Opened it up and just said: “Whassup, dude…?”
As yet, there has been absolutely no response.
Fine, bro. Let it take its time to (suggestion)
I will keep you posted.
Fine again. I will let you take your time to (statement)
With best regards.
-
@Aragorn said in which AI is best for reading BMS manuals?:
As yet, there has been absolutely no response.
Or, was there? … Just didn’t notice it yet.