  • For things like shared documents/database entities, a shared test dataset should also be available.

    Then they can no longer play around and modify those outputs without others noticing, because their unit tests would fail (see the sketch at the end of this comment).

    My assumption here is only an example; I don't know what you're dealing with.

    While I understand the rant, and I'm on your side regarding those jerk moves, it's a management issue. Even if management doesn't act on its own, it's up to you to bring this to their attention if it seriously conflicts with your work.

    And in the long run it's a win-win for everyone.

    edit: I work in early development myself, and despite being an engineer by background I'm coding, so I know quite well how difficult it is to do things properly instead of quick and dirty.
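
    Roughly what I mean, as a minimal sketch: a regression test pinned against a shared, versioned test dataset, so any silent change to the shared entities breaks the suite on everyone's machine. The file name `shared_dataset.json` and the function `render_invoice` are made up for illustration; the dataset is assumed to hold a list of `{"entity": ..., "expected": ...}` records.

    ```python
    # test_shared_outputs.py -- minimal sketch; all names are hypothetical
    import json
    from pathlib import Path

    import pytest

    # Shared, versioned test dataset that every team checks out together.
    DATASET = Path(__file__).parent / "shared_dataset.json"


    def render_invoice(entity: dict) -> str:
        """Stand-in for whatever produces the shared output."""
        return f"{entity['id']}: {entity['total']:.2f}"


    @pytest.mark.parametrize("case", json.loads(DATASET.read_text()))
    def test_shared_outputs_are_stable(case):
        # If someone quietly edits the shared entities or the expected
        # outputs, this fails for everyone, not just for them.
        assert render_invoice(case["entity"]) == case["expected"]
    ```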

  • Well, indeed the devil’s in the details.

    But going with your story: yes, you are right in general. But the human input is already there.

    > But you have to have human-made material to train the classifier, and if the classifier doesn’t improve, then the generator never does either.

    AI can already understand what stripes are and can draw the connection that a zebra is a horse with stripes. Therefore the human input is already given; brute-force learning will do the rest, simply because time is irrelevant and computations occur at a much faster rate.

    Therefore I believe that in the future AI will enhance itself, because the input it already has is sufficient to hone its skills.
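
    To make the generator/classifier coupling we're debating concrete, here is a minimal GAN-style training loop in PyTorch (a hedged sketch, not anyone's actual training code; the network sizes and the `human_made` data source are arbitrary stand-ins). The point it illustrates is that the generator's only learning signal flows through the classifier.

    ```python
    # Minimal GAN-style loop: the generator learns only from the
    # classifier's judgment of what looks human-made.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Tiny generator and classifier (discriminator); sizes are arbitrary.
    gen = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
    clf = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))
    opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
    opt_c = torch.optim.Adam(clf.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()

    def human_made(n=64):
        # Stand-in for the human-made training material under discussion.
        return torch.randn(n, 2) * 0.5 + 2.0

    for step in range(1000):
        # 1) Train the classifier to separate human-made from generated data.
        real = human_made()
        fake = gen(torch.randn(64, 8)).detach()
        loss_c = bce(clf(real), torch.ones(64, 1)) + bce(clf(fake), torch.zeros(64, 1))
        opt_c.zero_grad()
        loss_c.backward()
        opt_c.step()

        # 2) Train the generator to fool the classifier. Its only gradient
        #    flows through clf, so a classifier that stops improving also
        #    stops teaching the generator anything new.
        loss_g = bce(clf(gen(torch.randn(64, 8))), torch.ones(64, 1))
        opt_g.zero_grad()
        loss_g.backward()
        opt_g.step()
    ```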

    I know that for now we are just talking about LLMs as black boxes that are repetitive in generating output (no creativity). But a second grader also has many skills that are sufficient to enlarge their knowledge, in this sense not requiring everything to be taught by a human.

    I simply doubt this:

    > LLMs will get progressively less useful

    > Where will it get data about new programming languages or solutions to problems in new software?

    On the other hand, you are right that AI will not understand abstractions of something beyond its realm. But this does not mean it won't make rapid progress in areas where it can draw conclusions from what it already knows.

    And even in the case of new programming languages, I think a trained model will pick up the logic of the code, basically making use of its already-learned pattern-recognition skills, and probably at a faster pace than a human can understand a new programming language.