this post was submitted on 28 Sep 2024
304 points (97.2% liked)

People Twitter

all 49 comments
[–] bamfic 10 points 6 days ago (1 children)

Weed too. Programmers on weed think they're more productive, but they just introduce more bugs than sober programmers. Similar ratios to LLMs, too, IIRC

[–] [email protected] 4 points 6 days ago

This is clearly wrong. They never even finished the damn project at all! Lol

[–] [email protected] 8 points 6 days ago (1 children)

If people aren't getting anything out of LLMs, they're using them wrong.

If you blindly copy-paste code you don't understand from ChatGPT, there's obviously a problem

Get it to generate single functions at a time with the requirements clearly defined, then actually read over the code, and it's an amazing tool
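As a hypothetical illustration of that workflow (the spec and function below are invented, not from the thread), a tightly specified single function is short enough to actually read over:

```python
# Hypothetical example of the "one function at a time" approach: hand the
# model a precise spec, then review the short result line by line.
#
# Spec given to the LLM:
#   "Write chunked(seq, size): split a list into consecutive sublists of
#    length `size` (the last one may be shorter); raise ValueError if
#    size < 1."

def chunked(seq, size):
    """Split seq into consecutive chunks of at most `size` items."""
    if size < 1:
        raise ValueError("size must be >= 1")
    return [seq[i:i + size] for i in range(0, len(seq), size)]

print(chunked([1, 2, 3, 4, 5], 2))  # [[1, 2], [3, 4], [5]]
```

At this size, any wrong assumption the model made is visible in a single read, which is the whole point of the approach.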

[–] [email protected] 3 points 5 days ago (1 children)

I like built-in AI autocomplete more than an actual AI chat. It's amazing at filling in code that is really obvious so I don't have to type it myself. Anything even remotely complex will be subtly wrong though, so it's only good for reducing tedium.

[–] [email protected] 2 points 5 days ago

Exactly. It's good at generating anything you know well enough that you'll instantly spot the errors, but it shouldn't be used for anything you aren't fully comfortable with doing by hand

[–] Anticorp 10 points 6 days ago (2 children)

You're not doing it right if you're just straight up using whatever ChatGPT gives you. Use it to get the bulk of the code, edit it to work and follow project standards, then use it. Copilot has certainly improved my productivity. There is a cost, though: I am nowhere near as good at writing code from scratch as I was a year ago, because I haven't done it in around that long.

[–] [email protected] 4 points 6 days ago

Exactly. It's so rare to actually see anyone using it who knows how to use it as a tool rather than a magic do-everything machine

[–] [email protected] -1 points 6 days ago (1 children)

I guess programmers are just idiots on average then huh.

[–] Anticorp 1 points 5 days ago (1 children)

Not in my experience. Quite the opposite, actually.

[–] [email protected] 1 points 4 days ago (1 children)

So why are they not on average experiencing an efficiency increase by using AI?

[–] Anticorp 1 points 4 days ago

I don’t know. Probably because it’s a new tool, and they haven’t learned its limitations yet.

[–] [email protected] 3 points 5 days ago

I wonder how they're measuring productivity, I don't think there's a good way to measure it tbh 😅

[–] gmtom 7 points 6 days ago

I mean, if you're already competent in what you're doing, it probably won't help, but I've found they are very helpful when learning a new system.

Like it might turn 10 searches about how a language works into 1 or 2.

So I don't have to look up type conversions, or proper loop syntax, or whether I have to pipe a blank object into this function or can just leave it null, etc. I can just look up how to fix that one error it gave me instead.

It's pretty much the exact same cost-benefit as stealing code off of Stack Overflow.

[–] [email protected] 4 points 6 days ago* (last edited 5 days ago) (1 children)

Oh sweet irony... I remember butting heads with techbros on this very platform about their misguided intuition that people criticizing LLMs were going to be left out (assuming I or others had never tried them).

Yeah, I write enough bugs on my own, I'll pass on the 41% more, thank you very much.

Sure, I know this study needs to be replicated and should not be considered to be a holy truth... But the issues with the tech do pile up, and they're not just ethical concerns about resource usage. There's a new study like this one every week.

[–] SirQuackTheDuck 3 points 6 days ago

I got used to using Copilot on a project that had to be accessible to very junior programmers (thus: lots of explanation in comments). It worked great, creating a chunk of boilerplate for each described function.

It's absolutely useless in the real world, though: our code is 5+ years old, crosses over various coding conventions, and doesn't use the stuff seen as default on Stack Overflow.

Copilot couldn't figure out what I wanted. IntelliJ's long list of internal if-statements does the job, though, and saves a few households' worth of power consumption.

[–] renzev 4 points 6 days ago

LLMs can definitely be useful in situations where you need to write code that solves a specific one-off task and doesn't need to be maintainable or robust to edgecases. Some prompts where LLMs saved me 15 minutes or so of work:

  • "Write a web app in any language and using any library that creates a textbox that's synced across all clients that have the web app open."
  • "Write a python PIL program that iterates over the pixels in an image... Now make it a command line tool with argparse that takes the image path as input"
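The thread doesn't show what the second prompt produced, but the result might look roughly like the sketch below. To keep it dependency-free, this version operates on an ASCII PPM (P3) image instead of using PIL, and the function and file names are made up:

```python
# Sketch of the kind of throwaway CLI tool described above: iterate over
# every pixel value and invert it. Uses plain-text PPM (P3) instead of
# PIL so it needs only the standard library; names are hypothetical.
import argparse

def invert_ppm(text):
    """Invert each pixel value of an ASCII PPM (P3) image."""
    tokens = text.split()
    # P3 layout: magic, width, height, maxval, then r g b values
    magic, width, height, maxval = tokens[0], tokens[1], tokens[2], int(tokens[3])
    pixels = [str(maxval - int(v)) for v in tokens[4:]]
    return " ".join([magic, width, height, str(maxval)] + pixels)

def main():
    parser = argparse.ArgumentParser(description="Invert a P3 PPM image")
    parser.add_argument("image_path", help="path to the input .ppm file")
    args = parser.parse_args()
    with open(args.image_path) as f:
        print(invert_ppm(f.read()))

# Invoke as: python invert.py photo.ppm  (which would call main())
```

Exactly the profile described: fine for a one-off, but nothing here handles binary PPMs, comments in the file, or malformed input — the edge cases a maintained tool would need.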
[–] [email protected] 41 points 1 week ago* (last edited 1 week ago) (3 children)

One of the common uses I've heard is for generating boiler plate code. I have two thoughts on this. First, you actually have to understand what the boiler plate code is doing for it to be of any value. Second, there are already solutions for this that work just as well or better. Most of the major IDE's either support code templates or have extensions for that. You just have to be willing to take the time to create templates. I use Resharper with Visual Studio for this all the time.

I tried copilot free for a month and was not that impressed.

[–] [email protected] 1 points 6 days ago

Personally I think it's useful for pretty much anything you already understand

If you only use it to generate code to do things you already understand it saves you a lot of time and mental stamina by only having to proofread rather than write from scratch

[–] spicystraw 1 points 6 days ago

Brother, I am inherently lazy; I am not going to do any setup required for templates or add-ons. For simple stuff, AI-generated boilerplate is good enough, and it doesn't need maintenance the way templates do.

[–] [email protected] 26 points 1 week ago (5 children)

A colleague once "showed off" how impressive Copilot supposedly was. I was like:

  1. Please don't let AI write unit tests. That's the one spot where I really don't need bugs introduced by automation.
  2. Don't you guys use snippets? I do that shit faster with snippets, macros and knowing my way around neovim.
[–] [email protected] 10 points 1 week ago (1 children)

I don’t have snippets set up for languages I’ve never touched before.

But copilot sucks. ChatGPT went super downhill. Claude is alright. If I know the language then it’s not that helpful. But if I don’t, or I don’t know the algorithm, then yeah, it’s super helpful.

[–] [email protected] 1 points 1 week ago

My LSP has some neat built-in snippets. ¯\_(ツ)_/¯

[–] [email protected] 7 points 1 week ago* (last edited 1 week ago)

I'd rather just write it out. I've never used snippets or macros per se, but I do make liberal use of regex replace and multi-line cursors lol. Writing out a bunch of getters and setters? Regex!
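The regex trick for getters can be sketched like this (the field names are invented, and Python's `re.sub` stands in for an editor's regex find-and-replace):

```python
# Sketch of the regex-replace trick: turn Java-style field declarations
# into getters in one pass. The fields are hypothetical; re.sub stands
# in for the editor's find-and-replace.
import re

fields = "private int width;\nprivate String name;"

getters = re.sub(
    r"private (\w+) (\w+);",
    lambda m: f"public {m.group(1)} get{m.group(2).capitalize()}() {{ return {m.group(2)}; }}",
    fields,
)
print(getters)
# public int getWidth() { return width; }
# public String getName() { return name; }
```

In an editor you'd use backreferences like `\1`/`$1` in the replacement field instead of a lambda, but the pattern is the same.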

I did try LLM tab auto complete and while sure, it did suggest some stuff that was useful (after refactoring it), the amount of time I spent WTF'ing some suggestions it made wasn't worth it.

I find more benefit from asking an LLM about something I'm undecided or confused about, and while it's never given me a good enough answer, it has stirred enough creative juices in my brain to help me along lol.

Edit: sorry for the dupes. When Eternity said it failed the send I took that at face value.

[–] oakey66 23 points 1 week ago (1 children)

Absolutely has been my experience. It's actually slowed me down because of the slop it tends to throw in there as part of the hallucinations.

[–] [email protected] 2 points 6 days ago

I found it sometimes useful when I first started using it, but seems like it's getting worse. It's also annoyingly inconsistent, sometimes generates the right thing, then you go to another file to make a similar change and it does something completely useless....

[–] BrokenGlepnir 18 points 1 week ago (2 children)

I used Replit for education. At some point they added AI assistance. It was like pair programming with someone who is over-eager and doesn't know why you're doing things.

When I'm teaching code, I don't need ai to finish my circle calculation program before I've explained the first line to students.

[–] [email protected] 5 points 1 week ago (1 children)

I, on the other hand, am very happy that AI can autocomplete the n-th similar filter function I need to write.

[–] [email protected] 9 points 1 week ago

In-line completion of repetitive stuff is fine, even though it does often introduce bugs, meaning I still need to read every single char it writes. Now scale that up to entire functions, project that onto people who don't know the language/library well and don't understand the function itself. That's just chaos.

[–] [email protected] 2 points 1 week ago

Last year, for the first time, a large majority of my students used chatGPT.

This correlated with their skills at the start of the year: the more they lacked (or were lazy), the more they used it. And at the end of the year, they were the ones who had learned the least.

I'm not playing the old fart who thinks young people are getting dumber and dumber. There are beginning to be studies on this, and my little experience is consistent with their results.

[–] [email protected] 16 points 1 week ago (2 children)

Honestly, it’s GREAT for replying to bull shit business emails.

[–] SpaceNoodle 7 points 1 week ago (1 children)

Bullshit countering bullshit. Finally, an actual use for LLMs.

[–] [email protected] 2 points 6 days ago

More than you think, because outputting vast amounts of pseudo-accurate bullshit is EXACTLY what spammers and scammers do.

[–] TootSweet 4 points 1 week ago (1 children)

Probably better be careful to proofread it. If you're about to be fired for something you let ChatGPT tell an important client, I wouldn't think "it was ChatGPT's fault" is going to make much difference in your favor.

[–] [email protected] -1 points 1 week ago* (last edited 1 week ago)

Oh yeah. It’s always my mail. Not ChatGPT’s mail.

It just helps me pump up the word count to a point where the suits think it’s acceptable.

I prompt it with two sentences and I get two full paragraphs.

[–] [email protected] 8 points 1 week ago

It helps me as autocomplete, but that’s about it. The increase in productivity is negligible. The problem I have is suddenly feeling dependent on it. It’s like with navigation software. We rely on it so much, suddenly we can’t navigate our way out of a paper bag when it’s not around. There’s something unsettling about having the floor pulled out from under you like that.

If I was simply someone trying to get a job done and drone through the day, sure, it’s probably fine. But I’m someone that needs to know and understand everything I put into my commits, and I actually enjoy coding.

[–] [email protected] 8 points 1 week ago

It doesn't do anything for me. It's good for simple code that I could've written myself in the time it takes me to make sure that what it generates is what I needed.

[–] FMT99 5 points 1 week ago (1 children)

I don't know, I use GH Copilot every day. Not to generate whole blocks of code, but as autocomplete. More often than not it can finish my sentence for me. It makes wrong assumptions sometimes, but it most definitely saves me time. I was just using an IDE today that wasn't set up for Copilot, and it made me realize how much I miss it when it's gone.

[–] [email protected] 1 points 6 days ago (1 children)

At the end of the day, it's a bit like a good editor setup. Sure, your crazy neovim skills might save you a minute or two here and there. But I've met excellent programmers that can code in anything, and be just as efficient as the Neovim nerds.

Programming is not just about writing code, you spend a lot of time thinking about it, which is a huge part of the work.

[–] [email protected] 1 points 6 days ago* (last edited 6 days ago)

I think things like helix and neovim are more about the dopamine hit from hitting the exact right sequence of buttons to make the change you want

Definitely feel faster using helix than vscode though, even just stuff like m+i+" (select all within quotes, brackets etc)

[–] officermike 3 points 1 week ago

As a hobbyist with no production-environment or critical coding projects, Google Gemini has been great for generating a starting point for Arduino projects if I otherwise don't know how to get going.

[–] [email protected] 2 points 1 week ago

I got fed up with my boss coming around and saying “would this work go any faster with AI?”, but now I’ve become AI-Positive and I take her bits of technical debt we’d like to clean up saying “we could use AI on this!”.

[–] [email protected] 2 points 1 week ago

That would be correct for me. If I know what question to ask, I also know how to solve it. And it's not good at solving, because solving requires precision, which is something generative AI inherently doesn't possess. But it's also why I think gen AI will reduce the number of artists very significantly, because art just needs to be good enough. Or, to put it another way, it will democratize art. No longer will you need a band of 5 people and a smidge of talent to make a metal song for yourself. Likewise if you want a sick, reasonably specific wallpaper for your screen. Now an idea will be good enough for a good enough result.

[–] [email protected] 1 points 1 week ago (1 children)

People should be using it to get their test suite correct > 90% first before attempting to put any ML generated code into production
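That order of operations might look like the sketch below (`slugify` is an invented example; the point is that the hand-written tests pin down behaviour before any generated implementation is trusted):

```python
# Hypothetical sketch of the suggested workflow: write and agree on the
# tests first, then let the ML-generated implementation try to satisfy
# them. `slugify` is an invented example function.
import re

def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  trim me  ") == "trim-me"
    assert slugify("a--b") == "a-b"

# Only once the tests above are trusted does generated code go in:
def slugify(text):
    return re.sub(r"[\s-]+", "-", text.strip()).lower()

test_slugify()
```

If the tests came out of the same model as the implementation, they'd just encode the same misunderstandings, which is why writing them by hand first matters.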

[–] [email protected] 4 points 1 week ago

tests are code too. It makes you feel more productive to generate them, I guess.

If it's not good for the real code, it's not good for the tests

[–] hoshikarakitaridia 1 points 1 week ago* (last edited 1 week ago)

Been using copilot and another smaller AI model for hobby projects. Both of them slowed me down as much as they helped me with my code.

Now I've switched to supermaven free tier, and it's actually net positive for me.

It's not like it's writing whole functions for me, but it's actually quite context sensitive when it comes to defining functions and using them with the same parameters and things like that. I won't say I am 50% faster, but I could easily attribute a 30% development boost to it. Now mind you, for bug fixes and bug prevention that's back down to a 5% boost in speed, but that's still a net positive, so I'll take that.