It's funny that this is the first time I've seen a language explicitly condone "print debugging." It's one of those things that everyone says you're not supposed to do and then does anyway.
I think debuggers are not worth the cost for many kinds of debugging scenarios. They're great for stepping through projects you're not really familiar with or in situations where code seems to be in violation of baseline expectations, but fiddling around with breakpoints and watches and other UI particularities of the debugger carries more cognitive overhead than "print debugging". Additionally, I think the problem is better solved by using thoughtful and contextualized logging with appropriate severity levels. Couple this with a TDD approach to development and you'll end up in a workflow that is just faster than stepping through lines of code when you could have your assumptions verified through logs and test assertions.
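To make "contextualized logging with severity levels" concrete, here's a minimal Rust sketch using the `log` facade with `env_logger` (the crates and the order-processing scenario are my own choices for illustration, nothing from the article):

    use log::{debug, info, warn};

    // Hypothetical business logic, instrumented with leveled, contextual logs.
    fn process_order(order_id: u64, items: usize) -> Result<(), String> {
        debug!("processing order {} with {} items", order_id, items);
        if items == 0 {
            warn!("order {} has no items; rejecting", order_id);
            return Err("empty order".to_string());
        }
        info!("order {} accepted", order_id);
        Ok(())
    }

    fn main() {
        env_logger::init(); // severity is filtered at runtime, e.g. RUST_LOG=debug
        let _ = process_order(42, 0);
    }

Run with RUST_LOG=warn and the debug chatter disappears; that's the difference from bare prints.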
Agreed! Personally I've found that I can find and fix problems much faster with a few print statements than with a debugger--debuggers can make it harder to trace through the full execution of a program.
> debuggers can make it harder to trace through the full execution of a program.
Except you often don't want a full execution; you just want a partial execution around where you suspect the problem arises. I can assure you that a good UI debugger is extremely helpful. Command-line debuggers, less so.
Yeah, I feel like people who don't like debuggers either don't use IDEs with amazing debuggers, or don't take the time to learn and understand how to use them. You can do powerful things with a debugger in seconds. I still use print statements in other environments. I think 'debug' logging is useful; I prefer logging over "print" any day of the week.
This is why unit tests are so helpful. You only debug the part of the code that's broken. It's a kind of a different way of thinking and people often write what I consider to be "bad" tests -- i.e. tests that don't actually exercise real portions of the code, but rather make tautological statements about interfaces. I spend a considerable amount of time designing my code so that various scenarios are easy to set up. If you find yourself reaching for a fake/mock because it is hard to set up a scenario with real objects, it's an indication of a problem.
This extra work pays off pretty quickly, though. When I have a bug, I find a selection of tests that works with that code, add a couple of printf-equivalents and then rerun the tests. Usually I can spot the error in a couple of minutes. Being an older programmer (I worked professionally for over 10 years before Beck published the first XP book), I'm very comfortable with using a debugger. However since I started doing TDD, I have never once used one. It's just a lot faster to printf debug.
The way I've explained it before is that it's like having an automated debugger. The tests are just code paths that you would dig into if you were debugging. The expectations in the tests are simply watch points. You run the code and look at the results, only you don't have to single step it -- it just runs in a couple of seconds and gives you the results.
You may think that the overhead of writing tests would be higher than the amount saved with debugging and if it were only debugging, I think that would be true. However, one of the things I've found over the years is that I'm actually dramatically faster writing code with tests compared to writing it without tests (keep in mind that I've got nearly 20 years of TDD experience -- yes... I started that early on). I'm pretty good at it.
The main advantage is that when you are writing code without tests, usually you sketch together a solution and then you run the app and see if it works. Sometimes it does pretty much what you want, but usually you discover some problems. You use a debugger, or you just modify the code and see what happens. Depending on the system, you often have to get out of the context of what you are doing, re-run the app, enter a whole bunch of information, etc, etc. It takes time. Debuggers that can update information on the fly are great time savers, but you still have to do a lot of contextual work.
It takes me some extra time to write tests, but running them is super quick (as long as you aren't writing bad tests). Usually I insist that I can run the relevant tests in less than 2 seconds. Ideally I like the entire suite to run in less than a minute, though convincing my peers to adhere to these numbers is often difficult. That 2 seconds is important, though. It's the amount of time it takes your brain to notice that something is taking a long time. If it's less than 2 seconds (and run whenever you save the file), usually you will barely notice it.
In that way, I've got better focus and can stay in the zone of the code, rather than repeatedly setting up my manual testing and looking at what it is doing. Overall, it's a pretty big productivity improvement for me. YMMV.
Using a debugger for testing multi-threaded code is particularly painful. Tests and logs are especially superior in this case because you can make complex assertions that capture emergent behavior of a multi-threaded application. Pausing threads to step through them can often make it harder to observe the behavior you might expect to see when multiple threads are working together in real time.
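As a sketch of what that looks like in Rust (the counter scenario here is made up): spawn the threads, let them run at full speed, and assert on the emergent result rather than pausing anything:

    use std::sync::Arc;
    use std::sync::atomic::{AtomicUsize, Ordering};
    use std::thread;

    #[test]
    fn counter_is_consistent_across_threads() {
        let counter = Arc::new(AtomicUsize::new(0));
        let handles: Vec<_> = (0..8)
            .map(|_| {
                let c = Arc::clone(&counter);
                // each thread increments concurrently, with no pauses
                thread::spawn(move || {
                    for _ in 0..1_000 {
                        c.fetch_add(1, Ordering::SeqCst);
                    }
                })
            })
            .collect();
        for h in handles {
            h.join().unwrap();
        }
        // one assertion captures the emergent behavior of all 8 threads
        assert_eq!(counter.load(Ordering::SeqCst), 8_000);
    }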
> I think debuggers are not worth the cost for many kinds of debugging scenarios.
That just means your debugger has a prohibitively high cost to use. If it takes more than 2 clicks to launch a full debugging session of your project, you need a new IDE.
The cost isn't in starting the debugger, it's getting into what you think is the right point in the execution of the program, and then stepping through one step at a time until you notice something is off.
With printf debugging, you can put print statements everywhere you think something might be wrong, run the program, and quickly scan through the log to see if anything doesn't match your expectations.
Most debuggers have conditional breakpoints and print-breakpoints, no need to step through line by line manually. If something strange happens just right-click and insert a print as the program is running.
With print debugging, if you find a bug you have to stop your app, insert print lines, recompile, redeploy, relaunch, click through your app to reach the buggy location, and then scan through the log. This really feels like the stone age once you've used an IDE.
This "print debugging is bad" idea never made sense. Debugging is largely an effort to understand what's going on inside a program - having the program "talk back" to you via prints can be a great way to do this.
Sometimes, print debugging is the only practical way to fix a bug. For example, very rare bugs which can only be reproduced by running many instances of the code for a long time, or situations where attaching a debugger is not feasible (as in live services).
How so? In Rust, the macro is just a very thin wrapper around some output formatting boilerplate. In Haskell, it exists because you'd otherwise have to change the type of the function to add print-debug statements.
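For a rough idea of what that thin wrapper looks like, here's a hand-rolled sketch (not the actual std source, which also pretty-prints and is more careful about moves):

    macro_rules! my_dbg {
        ($e:expr) => {{
            let val = $e;
            // file!(), line!(), and stringify!() supply the context dbg! prints
            eprintln!("[{}:{}] {} = {:?}", file!(), line!(), stringify!($e), val);
            val
        }};
    }

    fn main() {
        let x = my_dbg!(2 + 3); // prints something like "[src/main.rs:11] 2 + 3 = 5"
        assert_eq!(x, 5);
    }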
The debug-printing method p in Ruby has been returning its argument for years now. dbg! seems possibly better for the use case, though, as you get the location as well.
For those of you who don't know Ruby, its map/filter/reduce functions chain like this:
values.map { |x| x + 2 }.select { |x| x > 3 }
So when you want to look at an intermediate result, there's the .tap method, which runs a block with that intermediate result, then passes it on to the next step in the chain.
[0, 1, 2, 3].map { |x| x + 2 }.tap { |x| p x }.select { |x| x > 3 }
This returns [4, 5] after printing [2, 3, 4, 5]. ("p" is Ruby's debug print; "puts" would print each array element on its own line.)
Took me a while to find it, but Rust's iterators have an `.inspect` method that gives you a read-only reference, so println debugging works fine. For more advanced tap-like stuff, use the `tap` crate (which allows you to write `array.tap(|xs| xs.sort())` for example, even though `sort` mutates in place and doesn't return the array).
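Something like this, mirroring the Ruby example above (prints each intermediate value without consuming it):

    fn main() {
        let result: Vec<i32> = vec![0, 1, 2, 3]
            .into_iter()
            .map(|x| x + 2)
            .inspect(|x| println!("{}", x)) // read-only peek: 2, 3, 4, 5
            .filter(|&x| x > 3)
            .collect();
        assert_eq!(result, vec![4, 5]);
    }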
Almost. dbg! also shows the expression in addition to what it evaluates to. That is:
dbg!(n * factorial(n - 1))
shows up as:
[src/main.rs:5] n * factorial(n - 1) = 2
[src/main.rs:5] n * factorial(n - 1) = 6
[src/main.rs:5] n * factorial(n - 1) = 24
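For context, a minimal program that would produce output along those lines (assuming the top-level call is factorial(4); the exact line number depends on the file layout):

    fn factorial(n: u32) -> u32 {
        if n <= 1 {
            1
        } else {
            // dbg! prints file, line, the expression text, and its value,
            // then returns the value, so the recursion is unchanged
            dbg!(n * factorial(n - 1))
        }
    }

    fn main() {
        factorial(4);
    }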
Although, IIRC there was some crazy dumper module by dconway (who else) that worked similarly to this, but I don't recall if it returned the value.
...
So, I just looked it up, and I found it. Actually, Damian wrote two. One that he updated from 2014 to 2016, and one he started in 2017 and has maintained to the present. I have no idea why.
Yep, leave it to Damian. I remember he did something similar for a test module, where it would show the expressions that were evaluated in the test-failure output.
The `p` function in Ruby is designed for quick pretty printing. The `console.*` functions in the browser are really only useful for debugging. Go has a built-in `println` so that you don't need to import the `fmt` package, and `fmt` has a `%+v` verb to get a pretty-printed, debugging-friendly representation of a value.
Elixir has a very similar function, IO.inspect(..), which prints a debug version of its argument and then returns it, very similar to dbg! here. Coupled with Elixir's pipeline operator `|>`, which passes the output into the first argument of the next function, it's pretty handy:
foo() |> IO.inspect |> bar()
This is just not true. For example, some bugs are so rare that you can't hope to reproduce them by just looking at "a live running instance". In this case, print debugging can be the only practical way to go.
Do people really not want people to print debug? The professors of all my CS classes so far (from data structures to assembly) explicitly tell students to print debug.
I would hope that at some point you'd move past this; once you get to enterprise scale, print debugging becomes slow and laborious.
Python/Ruby have long had a tradition of printf + unit test debugging. I assume Python has a debugger now but I honestly don't know anyone who uses it.
I use it all the time. If my program crashes, I swap `python foo.py` with `ipython --pdb -- foo.py` and it will drop me into an ipython session at the exception so that I can inspect variables and the backtrace.
Also, if I want to pause at a point and step from there, just drop `import ipdb; ipdb.set_trace()` at the line I want to set a breakpoint.
I use it all the time as well, and know many people who do. It was only yesterday that I discovered it works in a Jupyter notebook! So I assume enough people use it that it was integrated.
print is considered a big antipattern in Python, since it's equally easy to use the built-in logging.debug. With PyCharm, launching the debugger is as easy as right-clicking a file and pressing debug; I don't know anyone who doesn't use it.
That might just have been a failure with the GPS -- I've had the same issue before, but the car icon was offset a bit from the highway, so it looked like I was constantly driving on the grass next to it (that must have come as a surprise to the little imps inside my phone doing the navigating).
While I agree that the criticism "you're just trying to look smart" is misguided, it's a bit ironic that his response had the implication that "if I know this thing and you don't then you should be embarrassed [because I'm smarter/better than you]."
(I think that was the source of much of the objection to the comment, and the post makes no attempt to apologize for it, even if it was unintended.)
The example they give isn't really convincing, to me. I can see the use case for this kind of language, but for e.g. searching for a pattern on the shell that isn't just one of a few predefined special cases, it seems like it'd still be a lot easier to compose regexes on the fly.
One feature I really miss in a lot of chat clients (that this doesn't appear to have) is the ability to reply to specific messages, like in Stack Exchange chat or Telegram. It makes it a lot easier to follow several simultaneous conversations, or groups with many members.
I've always thought of the main IRC channel as being like a crowded room. You can listen in on various conversations, but if you want to have a conversation with one or more people, you move to another location and converse amongst yourselves, either by private message or in another (temporary) channel.
Threaded conversations are better handled by email or newsgroups as opposed to chat in my opinion.
I don't think that's really feasible with IRC unless you invent your own extension to the standard and then convince every other IRC client to adopt it.
Couldn't it be done in the UI by referencing the line/message ID, kinda like how forums "quote" the message you clicked Reply on, or 4ch references a link back to the one you're quoting?
Let a user click the timestamp to the left of the messages (or a "#" or other symbol), which inserts something like this into the reply box:
> Replying to username:
Then your message you send ends up looking like:
> Reply to username: I agree!
Clicking on "Reply to username" or just the username could scroll back the chat and highlight the message you replied to.
One problem with IRC is that everyone is using a different client. It's like how you can set it up so you receive messages when you're offline... but nobody else does that, so it's not a real solution.
Of course, the context here is a client that does indeed implement extra features, and you're right that a text format per message would be an interesting way to implement this: the client could simply hide those messages from the normal view and slot them into the thread system. People using other clients would just see some extraneous line noise like "Replying to <username>|<timestamp>: hello world".
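A sketch of how a client might recognize those lines (the helper and the exact "username|timestamp" separator are assumptions, not any IRC standard):

    // Returns (username, timestamp, body) when a line matches the convention.
    fn parse_reply(line: &str) -> Option<(&str, &str, &str)> {
        let rest = line.strip_prefix("Replying to ")?;
        let (target, body) = rest.split_once(": ")?;
        let (username, timestamp) = target.split_once('|')?;
        Some((username, timestamp, body))
    }

    fn main() {
        assert_eq!(
            parse_reply("Replying to alice|12:03: hello world"),
            Some(("alice", "12:03", "hello world"))
        );
    }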
Yeah exactly, it's still "useful" to people without the client just by showing them it's a reply to someone else, even without the embedded link and extra functionality the client adds. I think it's a good compromise.
The idea that language limits or constrains the way people think (the stronger side of the Sapir-Whorf hypothesis) is now largely discredited by the majority of linguists.
I speak 8 languages from several separate groups (Germanic, Romance, Slavic, Sino-Tibetan, Japonic), and my impression is that languages truly put constraints on one's ability to formulate thoughts. Of course, mentalese, the unspoken one, is different and independent, but it's rare that people retain it into older age, and by then prevailing cultural/linguistic currents shape their thoughts. Sapir-Whorf might be discredited as a general hypothesis, but it might correspond to the practical state of affairs 95% of the time.
I don't know what he is referring to. But one interesting thing is that there is a language where you can express that someone is doing something without including the time at which they are doing it. This is not possible in the languages I know (De/En/Fr).
I can imagine that this ability makes different thoughts possible.
When looking at programming languages, it is indeed the case that your "lingua franca" limits the imagination of what software architectures are possible to solve the problem.
Yet we have a wide variety of programming languages, and switch between them as the need arises for a better fit to the problem domain we're working in. With the wide variety of things like number systems in different languages, wouldn't it make sense that using some languages might be better for mathematical communication, and others maybe not offer the precision required to communicate everything necessary? It would make sense to me if some languages were better for talking about navigation, or weather than others. Or some were better with numbers, or parts of anatomy, etc.
That's a softer form of it, which is fairly obviously true; a language with no words for weather will obviously limit discussion of the weather forecast. That's a bit different from whether it limits your ability to think about those things, though.
Do you speak more than one natural language? I am an anglophone but took most of my schooling in French. Though I work in English now, I can remember distinctly 'thinking in English' and 'thinking in French', and depending on the problem I was wrestling with, the constructs of the language you frame your thinking in absolutely can give you an advantage or disadvantage.
I always thought it was remarkable how much human thought was imprisoned by language, and it really makes me wonder what a human 'without language' would be capable of thinking.
One other than my native tongue, but not nearly fluently.
To clarify, I don't disagree with what you're saying; I'm not a linguist but from what I know there certainly seems to be some truth in the weaker forms of the Sapir-Whorf hypothesis (that language affects how you think), but the strong form (that language controls how you think) seems fairly well disproven these days.
And now for the forecast of the local atmosphere state: In the morning, the yellow ball in the sky will be hidden by areas of water aerosol. Around noon, water will fall from the sky.
Crazy idea: what if the first letter of the domain name was used to determine the first letter of the Unicode name of the emoji to select? That way there'd be some kind of mnemonic instead of having it be purely random.
It makes me a little uncomfortable that they're using curl|bash for something as simple as "put this 10-line script somewhere in your $PATH," especially when the script involves sudo (to move it into /usr/local/bin). Sure, it's easy to inspect the script and see that it's not doing anything malicious, but it makes install processes like this, where it'd be incredibly easy to slip something malicious in, seem normal.
I didn't even see that. I followed the directions in the top section ("Clone this repo and copy the rb file to somewhere in your path (or just copy and paste the above).") (I did have to chmod +x)
That said,
brew install foo
is a normal part of many development workflows, which essentially just curls a file from someone else's git repo.
I mean, if you think about it, any time you run any installer, whether via brew install, apt-get install, or an .exe or .msi, you're effectively running someone else's unknown code on your system, often as a superuser (e.g. with sudo apt-get install). Is there a significant difference here? At least in this case you could potentially download the shell file and read it before you run it, unlike with a binary executable.
Does any other language have a similar feature?