arisuchan    [ tech / cult / art ]   [ λ / Δ ]   [ psy ]   [ ru ]   [ random ]   [ meta ]   [ all ]    info / stickers (temporarily disabled)

/λ/ - programming

structure and interpretation of computer programs.



File: 1519258436180.png (42.15 KB, 781x464, csharp.png)

 No.981

Learning C# at the moment. Can't say I hate it too much…
short x = 2;
short y = 2;
// short z = x + y;       // won't compile: x + y has type int
short z = (short)(x + y); // compiles

This makes me laugh though. While x and y are both shorts, the result of their addition is implicitly given the type int, and so requires an explicit cast to be assigned to another short. I understand nobody really needs to be working with shorts in C#, and if memory efficiency at that level is something you're worried about, the bloat from the C# runtime will eat up way more memory than you'll be "saving" by using shorts instead of a 32-bit or 64-bit number. Still though, this makes me chuckle.
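Java applies the same binary numeric promotion rule as C#, so the overflow that the cast can hide is easy to demonstrate there (a Java sketch, not C# itself; the values are made up for illustration):

```java
public class ShortOverflow {
    public static void main(String[] args) {
        short x = 20000;
        short y = 20000;
        // x + y is computed as a 32-bit int (40000); the cast then
        // throws away the high bits: 40000 - 65536 = -25536
        short z = (short) (x + y);
        System.out.println(z); // prints -25536
    }
}
```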

Can't say I dislike the language itself so far, honestly, though I prefer a lower-level language. While easy to learn and work with, it's no fun at all. The biggest drawback for me is that it's primarily a language used on windows. Who the fuck wants to use, develop for, or (god forbid) develop on windows?

Anyone else have thoughts / opinions / horror stories or just quirky anecdotes relating to C#?

 No.982

During execution the .NET virtual machine only understands 4- and 8-byte values: shorter values are expanded by the loading instructions and truncated by the saving instructions. I have no idea how they are laid out in memory, but I wouldn't be surprised if in many cases they actually took up more space than needed, since word-aligned values are the fastest to read.
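The JVM uses the same evaluation model, so the widen-on-load / narrow-on-store behavior can be sketched in Java (an analogue, not .NET itself):

```java
public class WidenNarrow {
    public static void main(String[] args) {
        int big = 0x12345;        // 74565: needs more than 16 bits
        short s = (short) big;    // the store keeps only the low 16 bits
        System.out.println(s);    // prints 9029 (0x2345)
        int back = s;             // the load sign-extends back to 32 bits
        System.out.println(back); // prints 9029
    }
}
```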

 No.983

>>982
Huh, that's pretty interesting. Makes a lot of sense though; most implicit conversions that I've noticed seem to involve either 32-bit or 64-bit numbers.

 No.985

Did you guys ever hear of the 'convergence' operator in C++?
It works like this:
int i = some_value;
while (i → 0)

this returns 'true' if i is converging to 0, and 'false' otherwise.
oops I made a mistake, it's (i-- > 0)

 No.986

>>985
Never heard of it. Going to look it up as soon as I've written this post; however, speaking of novelty operators, C#'s collection of operators related to null and nullables is pretty cool. For example, the "elvis" operator. Anyone have any clue as to why it's called that?

 No.987

>>986
It's a joke, there's no convergence operator; it's just a post-decrement followed by a greater-than. For some reason Arisuchan replaced that character sequence (minus-minus-greater-than) with a unicode arrow, that's probably a bug.
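Spelled out, the "goes to" joke is just this loop (a Java sketch; it reads the same in C, C++, or C#):

```java
public class GoesTo {
    public static void main(String[] args) {
        int i = 5;
        // i-- > 0: compare i against 0, then decrement — nothing converges
        while (i-- > 0) {
            System.out.print(i + " "); // prints 4 3 2 1 0
        }
    }
}
```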

I had no idea there was an Elvis operator, that's pretty funny. Wikipedia says it's because ?: looks like Elvis: https://en.wikipedia.org/wiki/Elvis_operator#Name
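Java has no Elvis operator; Groovy/Kotlin's `a ?: b` (and C#'s null-coalescing `a ?? b`) just abbreviate this ternary pattern, which is where the `?:` that supposedly looks like Elvis comes from (Java sketch with a made-up fallback value):

```java
public class Elvis {
    public static void main(String[] args) {
        String input = null;
        // what the Elvis operator abbreviates: fall back when the value is null
        String name = (input != null) ? input : "fallback";
        System.out.println(name); // prints fallback
    }
}
```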

 No.988

> short x = 2;
> short y = 2;
> // short z = x + y; // won't compile
> short z = (short)(x + y); // compiles

This actually makes sense. If you want to avoid overflow, you should ensure that adding two fixed-length integer types results in a longer fixed-length integer or a variable-length integer. (I doubt that's actually why; the explanations about .NET defaulting to 32- and 64-bit types sound reasonable. However, your example would be logical in a language that tried to prevent certain common errors at compile time without needing a complex type system.)
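The overflow-avoidance point in Java terms: because both operands are promoted to int, the sum of any two shorts is always exact (a sketch of the argument, not a claim about why C# actually does it):

```java
public class ExactSum {
    public static void main(String[] args) {
        short x = Short.MAX_VALUE; // 32767, the largest short
        short y = Short.MAX_VALUE;
        int sum = x + y;           // promoted to int, so always exact
        System.out.println(sum);   // prints 65534, which no short could hold
    }
}
```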

 No.989

>>981
https://github.com/dotnet/csharplang/blob/master/spec/expressions.md#user-content-binary-numeric-promotions
Binary numeric promotions

Binary numeric promotion occurs for the operands of the predefined +, -, *, /, %, &, |, ^, ==, !=, >, <, >=, and <= binary operators. Binary numeric promotion implicitly converts both operands to a common type which, in case of the non-relational operators, also becomes the result type of the operation. Binary numeric promotion consists of applying the following rules, in the order they appear here:
[…]
Otherwise, both operands are converted to type int.
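Java's binary numeric promotion ends in the same "otherwise, both operands are converted to int" clause, and autoboxing makes the result type observable (Java sketch):

```java
public class PromotedType {
    public static void main(String[] args) {
        short a = 1, b = 2;
        // a + b has static type int, so it boxes to an Integer, not a Short
        Object result = a + b;
        System.out.println(result.getClass().getSimpleName()); // prints Integer
    }
}
```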

 No.990

>>988
When you put it that way, you are absolutely correct. I never even thought about that.

>>989
Thanks for that link.

 No.991

>>981
>Who the fuck wants to use, develop for, or (god forbid) develop on windows?

why are you doing this?
school?
work?
other: ____?

 No.1031

>>991
This.

OP what the fuck are you doing?

 No.1033

>>991
For fun. Even if I don't at the moment plan on using/developing in .NET, it won't hurt to know how; but who knows what will happen in the future. Plus, seeing how C# and .NET do things can help when designing software even for other platforms.

Just for the added perspective I guess, if nothing else.

 No.1091

>>988
Why isn't int + int = long then?

Why would C# be picky about shorts in particular? I could just as easily be adding numbers with a sum larger than 32 bits, but C# doesn't warn me about those at compile time.
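The flip side of that question, sketched in Java (C# behaves the same way in an unchecked context): int + int stays int and silently wraps, so you have to widen an operand yourself if you want the true sum.

```java
public class IntWrap {
    public static void main(String[] args) {
        int big = Integer.MAX_VALUE;
        int wrapped = big + 1;         // no promotion to long: wraps silently
        long widened = (long) big + 1; // widen an operand to get the true sum
        System.out.println(wrapped);   // prints -2147483648
        System.out.println(widened);   // prints 2147483648
    }
}
```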

 No.1092

>>1091
note that he said
> (I doubt that's actually why;

 No.1093

>>1092
>>1091
You don't have to guess: the Common Language Infrastructure standard clearly says at §I.12.1 that it's because the stack can only hold int32, int64, and native int sizes:
The CLI model uses an evaluation stack. Instructions that copy values from memory to the evaluation stack are “loads”; instructions that copy values from the stack back to memory are “stores”. The full set of data types in Table I.6: Data Types Directly Supported by the CLI can be represented in memory. However, the CLI supports only a subset of these types in its operations upon values stored on its evaluation stack — int32, int64, and native int. In addition, the CLI supports an internal data type to represent floating-point values on the internal evaluation stack. The size of the internal data type is implementation-dependent. For further information on the treatment of floating-point values on the evaluation stack, see §I.12.1.3 and Partition III. Short numeric values (int8, int16, unsigned int8, and unsigned int16) are widened when loaded and narrowed when stored. This reflects a computer model that assumes, for numeric and object references, memory cells are 1, 2, 4, or 8 bytes wide, but stack locations are either 4 or 8 bytes wide. User-defined value types can appear in memory locations or on the stack and have no size limitation; the only built-in operations on them are those that compute their address and copy them between the stack and memory.
Most importantly:
>However, the CLI supports only a subset of these types in its operations upon values stored on its evaluation stack — int32, int64, and native int.
And:
>Short numeric values (int8, int16, unsigned int8, and unsigned int16) are widened when loaded and narrowed when stored.

https://www.ecma-international.org/publications/standards/Ecma-335.htm

 No.1095

>>1093
>>1092
I wasn't confused about why C# does it, but about why he said it made sense or was logical regardless of C#'s implementation details.

 No.1322

File: 1534958370507.png (205.61 KB, 622x494, gopher_joke.png)

OP here, ironically I now work in .NET; guess it just goes to show that sometimes learning for learning's sake pays off anyway.

 No.1325

>>1322

Have you tried GoLang by the way? Did you get your .NET cert anywhere? I saw some courses I could follow online, and I really loved C back in the day, before Java was forced on us by our college.

Nowadays I'm spending a lot of time learning go and I never get to hear any negatives about the language.


