The title says it. What are the benefits of using `if variable >= 1 then` if `if variable > 0 then` works just fine?
They're not the same thing if you're not working with integers! `0 < 0.5`, but `not (1 <= 0.5)`!
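A quick sketch of the difference with a fractional value (plain Lua numbers, nothing else assumed):

```lua
local x = 0.5
print(x > 0)   --> true:  0.5 is strictly positive
print(x >= 1)  --> false: 0.5 is not at least 1
```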
In fact, this distinction usually helps me decide which one I want to use.
For example, for checking that an index is in-bounds, I write `if 1 <= i and i <= #list then`. That's because anything lower than 1 is no good (but 1 is a good index). (Writing `0 < i` would work, but doesn't communicate the intent as clearly.) The same reasoning applies for the other end.
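To make that concrete, here's a minimal sketch; the `get` helper is hypothetical, not anything from the standard library:

```lua
-- Hypothetical bounds-checked accessor using that test;
-- `list` is any Lua sequence.
local function get(list, i)
  if 1 <= i and i <= #list then
    return list[i]
  end
  return nil, "index out of bounds"
end

print(get({"a", "b", "c"}, 2))  --> b
print(get({"a", "b", "c"}, 4))  --> nil  index out of bounds
```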
If I were working with some strings and only wanted to handle two-digit numbers, I would probably write `if n < 100 then assert(#tostring(n) <= 2)`. That's because it identifies 100 as the first problematic number, which is more important than 99 being the last OK number.
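A sketch of how that check might sit in real code; the `format_two_digits` function is made up for illustration, and it assumes `n` is a non-negative integer (negatives and fractions would need more characters than digits):

```lua
-- Hypothetical formatter for a fixed two-character column.
-- Assumes n is a non-negative integer.
local function format_two_digits(n)
  if n < 100 then
    assert(#tostring(n) <= 2)         -- holds for 0..99
    return string.format("%2d", n)
  end
  error("n has more than two digits: " .. n)
end

print(format_two_digits(7))   -->  7  (left-padded to width 2)
print(format_two_digits(42))  --> 42
```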
As a general readability tip, I keep all of my comparisons as `<` and `<=`, so that the values increase from left to right, just as they do when we count. In particular, I find `0 < n and n < 10` much easier to read than the more typical `n > 0 and n < 10`, which places the comparisons in an "inside-out" order just so that the variable appears first.
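Side by side (the single-digit check here is just an illustration):

```lua
local n = 7

-- Number-line order: values increase from left to right.
if 0 < n and n < 10 then
  print("n is a single positive digit")
end

-- The more typical spelling: same meaning, "inside-out" order.
if n > 0 and n < 10 then
  print("n is a single positive digit")
end
```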
If your variable is an integer then there are no benefits besides clarity. Go with whichever sounds the clearest to you at the time.
`x >= 1` implies to the programmer that `x` should be at least 1, whereas `x > 0` implies that `x` must be strictly positive. In some cases the latter may sound clearer and make your code easier to understand.
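A quick sanity check of that equivalence for integers:

```lua
-- For integer x, the tests x >= 1 and x > 0 always agree.
for x = -2, 3 do
  assert((x >= 1) == (x > 0))
end
print("equivalent for integers")
```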
Using `<` and `>` checks whether number X is strictly greater than/less than number Y, while `<=` and `>=` check whether number X is greater than or equal to/less than or equal to number Y.
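For example, the two pairs differ exactly when the values are equal:

```lua
local x, y = 5, 5
print(x > y)   --> false: 5 is not strictly greater than 5
print(x >= y)  --> true:  equality satisfies >=
```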