Re: swift and objective-c
- Subject: Re: swift and objective-c
- From: Dietmar Planitzer <email@hidden>
- Date: Thu, 05 Jun 2014 10:04:11 -0700
The problem with the '..' vs '...' syntax is that it is very easy, in a code review, to overlook that one has been used where the other should have been, because the dots occupy only a few pixels on the screen, which makes them hard to distinguish visually. IMO this design choice is more problematic and less safe than the '=' vs '==' operator choice that C introduced decades ago and for which it has been criticized ever since. At least a '=' occupies many more pixels on the screen than a '.', so the difference between a single '=' and two consecutive ones is easier to see.
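To make the complaint concrete, a minimal sketch (note: later betas renamed the half-open '..' to '..<', which is the spelling used below):

let halfOpen = 0..<5    // 0, 1, 2, 3, 4 -- the end point is excluded
let closed   = 0...5    // 0, 1, 2, 3, 4, 5 -- the end point is included

for i in halfOpen { print(i) }    // prints 0 through 4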
Overall I do think that Swift is a step in the right direction, in the sense that we should move on from today's ObjC to a language that is safer and makes writing code more efficient. However, as I read through the Swift language manual and play around with it, I find more and more things about it that irritate me:
a) On one hand, its syntax tries to be more compact and focused than ObjC's, which I think is a good design goal. On the other hand, Swift has a lot of unnecessary visual clutter in its syntax that distracts from the point the writer is actually trying to make. There are also syntax elements that are ambiguous in their meaning.
E.g.:
let x = 4;
To me there is nothing in that statement that makes it clear we are actually defining a constant here. 'let' has been used in other programming languages, such as BASIC, to assign a value to a variable, so the question is why Swift uses it to define a constant. Why does Swift not use the keyword 'const' or 'final' instead?
const x = 4;
would make it unambiguously clear that we’re defining a constant.
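For what it's worth, the immutability itself is enforced at compile time, whatever one thinks of the keyword; a minimal sketch:

let x = 4
// x = 5    // compile-time error: 'x' is a constant
var y = 4
y = 5       // fine: 'var' declares a mutable variable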
E.g.:
func foo(x: Int) -> (Int) { ... }
There are two problems with the ‘func’ here:
- it is redundant information and visual clutter.
- it is misleading, and incorrect as soon as you actually define a method. A function is not a method; those are two different things. A method is always defined in the context of a class and always executes in the context of an object, which is a guaranteed part of its execution environment; a function is independent of any class and has no such luxury. Finally, methods are dispatched in one of several ways, while functions are invoked directly, without any indirection.
Why not simply stick with the traditional ObjC and/or C/C++/Java style declaration syntax?
- (Int)foo:(Int)x { ... }
or
Int foo(Int x) { ... }
These declarations express everything we need to know about a function or method. They are precise and to the point, and they do not mislead the reader into thinking that we are writing a function (which by definition, among other things, cannot be overridden in a subclass) when in reality we are defining a method (which may be overridden in a subclass).
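A minimal sketch of the ambiguity (the types and names are made up): in Swift the same 'func' keyword introduces both a free function and an overridable, dynamically dispatched method:

// A free function: invoked directly, not overridable.
func area(width: Double, height: Double) -> Double {
    return width * height
}

class Shape {
    // A method: same 'func' keyword, but dispatched on an instance
    // and overridable in a subclass.
    func area() -> Double { return 0 }
}

class Rectangle: Shape {
    let width = 2.0, height = 3.0
    override func area() -> Double { return width * height }
}

let shape: Shape = Rectangle()
print(shape.area())    // 6.0 -- dynamically dispatched to the override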
Etc.
b) On one hand, the semantics and syntax want to be safer than ObjC's, but on the other hand the language promotes unsafe practices and comes with its own surprising behaviors.
E.g.:
z = x + y
This piece of code contains a bug. The problem is that, at first glance (and especially for someone who did not originally write the code), it is impossible to tell that it is buggy. What the original author actually wanted to write is this:
z = x + y * 2;
The reason the bug found its way into the code and made it into the final product is that the author was interrupted while writing this code and simply forgot to add the '* 2' when he resumed. But because semicolons are optional in Swift, the compiler did not complain. In ObjC the compiler would have reminded the developer that the statement is incomplete.
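A minimal sketch of the scenario (the variables are made up): both statements below compile, and because a Swift statement simply ends at the line break, there is no stray token left for the compiler to flag:

var x = 1, y = 2, z = 0

z = x + y        // compiles cleanly, even though the intended '* 2' was never typed
z = x + y * 2    // what the author meant to write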
E.g.:
var i = 2.5
may or may not be buggy code. It is impossible to tell from the statement alone, because explicit type information is missing. It may be that 'i' was originally a float value but the code was later reworked so that 'i' should now be an int value. I think that explicit typing should always be the default and that a modern language should encourage the developer to be explicit about the types he is working with. Consequently, the variable declaration syntax should look like this:
float i = 2.5;
The type comes first because the nature of a variable is fundamentally defined by its type. Without a type the identifier is meaningless. To support type inference we could do this:
auto i = 2;
which at least would have the advantage over 'var' that it follows established (C++) precedent and is thus related to the domain of ObjC/C/C++ development. But again, by default the language should encourage explicit typing.
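To be fair, Swift does accept an explicit annotation, just with the type written after the name instead of before it; a minimal sketch:

var i: Double = 2.5    // explicit: the type is stated in the declaration
var j = 2.5            // inferred: j is a Double, but the reader must work that out
// var k: Int = 2.5    // compile-time error: a Double literal cannot initialize an Int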
E.g.:
Strings are passed by value. The reason the language manual gives for this behavior is safety: a string that we pass to a method cannot be accidentally changed by it. However, this design choice has some bad consequences and leads to surprising behavior:
- it is unexpected behavior that ignores how strings have been defined and used in ObjC for decades. I don't see a good enough reason to change the behavior of strings in such a fundamental way just to prevent a method from accidentally changing a string, because "accidentally changing a string" is a *solved problem*. Cocoa solved it in a very elegant and powerful way 20 years ago by introducing the concept of a class cluster with an immutable base class and a mutable subclass (see the sketch after this list). This allows a method to state precisely what it is going to do with the string while avoiding the performance regressions that a by-value string design can introduce.
- (void)changeString: (MutableString)x
makes it clear that this method is potentially going to change the string.
- (void)changeString: (String)x
makes it clear that despite what the method name implies, this method is *not ever* going to change the string.
- by-value strings can cause performance regressions in surprising and unexpected ways, because nothing in the declaration of a method makes it clear and obvious whether the method might actually change the string. As long as it doesn't, there is no problem; but as soon as it does change the string, the string must be copied. Since strings have traditionally been passed by reference (especially in C-based languages), the average developer is not going to expect this behavior.
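A minimal sketch contrasting the two designs (modern Swift spelling, with Foundation for the class-cluster half; the function names are made up):

import Foundation

// Swift's by-value String: the callee works on a copy, so the
// caller's string can never change behind its back.
func decorated(_ s: String) -> String {
    var copy = s
    copy += "!"
    return copy                      // the caller's original is untouched
}

// The Cocoa class-cluster style: the parameter type itself
// documents whether mutation is possible.
func logString(_ s: NSString) {      // immutable base class: cannot mutate
    print(s)
}

func shout(_ s: NSMutableString) {   // mutable subclass: mutation is declared intent
    s.append("!!!")
}

let name = NSMutableString(string: "world")
shout(name)
print(name)                          // world!!! -- mutated in place, as the signature warned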
There are two additional areas that I could talk about, but those areas involve information that is still covered by the NDA and thus I won’t discuss them in public.
So overall I’m all in favor of creating a better, safer and syntactically more powerful successor to ObjC. But I’m not convinced at this point that Swift actually reaches that goal in its current form.
Regards,
Dietmar Planitzer
On Jun 5, 2014, at 12:07 AM, Chris Lattner <email@hidden> wrote:
>
> On Jun 3, 2014, at 8:19 PM, Jens Alfke <email@hidden> wrote:
>
>>
>> On Jun 3, 2014, at 2:16 PM, Ron Hunsinger <email@hidden> wrote:
>>
>>> - In Swift, a..b includes a and excludes b; a...b includes both endpoints.
>>> - In Ruby, it's exactly the opposite. a..b includes both endpoints; a...b excludes b.
>>
>> Oh, weird. I remembered the Ruby range operators when I read about Swift’s and assumed Ruby was the inspiration; but then why do them the other way around?
>>
>> (But to me, it makes more sense that three dots would give you a bigger range than two dots. Shrug.)
>
> The Swift approach is easy to remember: one more dot gives you one more value.
>
> -Chris