by levifig on 11/11/15, 6:32 AM with 198 comments
by lucozade on 11/11/15, 11:49 AM
Back in the day I did most of my dev work on Solaris. I then spent 4 years as CTO at a startup that was pretty much only Windows.
When I subsequently went back to working at a unix shop I was initially struggling with vi as I tried to read some of the C++ code. I couldn't remember commands and was having to refer to the man pages every few minutes. It was torture.
A couple of days in, I was writing up some notes in vi when someone walked past my desk and started chatting. When we finished talking I looked down at the monitor and I'd written more than I had when I was concentrating, nicely formatted, the works. Turns out "my hands" had remembered a load of what I thought I had forgotten.
For the next few days I had to keep finding ways to distract myself so that I could work efficiently. Eventually it all came to the foreground but it was the most bizarre experience while it was happening.
by SXX on 11/11/15, 10:45 AM
Once he starts working on a Linux port he'll regret that. Every developer who starts with their own platform-specific code ends up using SDL2 anyway. Don't make that mistake.
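For illustration, this is roughly what the SDL2 path looks like: one code path for the window, GL context, and event loop across Windows, OS X, and Linux (the title and resolution below are placeholders, not anything from Banished):

    #include <SDL.h>

    int main(int, char**) {
        if (SDL_Init(SDL_INIT_VIDEO | SDL_INIT_GAMECONTROLLER) != 0)
            return 1;

        // One call instead of Win32/Cocoa/X11-specific window creation.
        SDL_Window* window = SDL_CreateWindow("Game",
            SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
            1280, 720, SDL_WINDOW_OPENGL);
        SDL_GLContext gl = SDL_GL_CreateContext(window);

        bool running = true;
        while (running) {
            SDL_Event e;
            while (SDL_PollEvent(&e))        // keyboard, mouse, gamepad, quit...
                if (e.type == SDL_QUIT) running = false;
            // ... render the frame here ...
            SDL_GL_SwapWindow(window);
        }

        SDL_GL_DeleteContext(gl);
        SDL_DestroyWindow(window);
        SDL_Quit();
        return 0;
    }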
by stinos on 11/11/15, 7:48 AM
I wish the author told me more about it than just this. Can somebody comment on how it compares to recent VS editions these days? About 5 years ago I also looked into using OS X as my main OS. As I've always been using non-command-line graphical text editors and IDEs for most coding, that made Xcode the go-to environment, but I just couldn't deal with it even though I tried. I don't remember all the details, but in general it just felt inferior to VS on pretty much all fronts, with no advantages of any kind (for C++). Again, IIRC, but it did annoying things like opening the same document in multiple windows, one for editing and one for debugging or so? Anyway, what's the state today?
by curyous on 11/11/15, 8:08 AM
by lmolnar on 11/11/15, 1:37 PM
OpenGL on multiple monitors - this was much more difficult to do on MacOS. I had to create a separate window for each monitor, create a rendering context for each window, make sure my graphics code was issuing the drawing commands to the proper context, then have each context queue/batch "pending" rendering commands and issue them all at once at the end of a frame on a by-context basis. Whereas on Windows you can pretty much create a window that spans multiple monitors and draw to it with a single rendering context.
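As a rough sketch of that per-monitor arrangement (using GLFW purely for brevity; the parent worked against the native APIs directly), the structure is one window and one context per monitor, with each monitor's commands issued against its own context every frame:

    #include <GLFW/glfw3.h>
    #include <vector>

    int main() {
        if (!glfwInit()) return 1;

        int monitorCount = 0;
        GLFWmonitor** monitors = glfwGetMonitors(&monitorCount);
        if (monitorCount == 0) return 1;

        // One fullscreen window (and therefore one GL context) per monitor,
        // sharing textures/buffers with the first context.
        std::vector<GLFWwindow*> windows;
        for (int i = 0; i < monitorCount; ++i) {
            const GLFWvidmode* mode = glfwGetVideoMode(monitors[i]);
            GLFWwindow* w = glfwCreateWindow(mode->width, mode->height,
                                             "per-monitor window", monitors[i],
                                             windows.empty() ? nullptr : windows[0]);
            windows.push_back(w);
        }

        while (!glfwWindowShouldClose(windows[0])) {
            for (GLFWwindow* w : windows) {
                // Commands must go to the context that owns this monitor's window.
                glfwMakeContextCurrent(w);
                glClear(GL_COLOR_BUFFER_BIT);
                // ... flush this context's queued rendering commands here ...
                glfwSwapBuffers(w);
            }
            glfwPollEvents();
        }
        glfwTerminate();
        return 0;
    }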
Input - I used DirectInput on Windows and wrangled a suitable implementation using HID Utilities on Mac, which was not easy given my lack of previous USB programming experience. A major annoyance was the lack of a device "guid" that you can get via HID Utilities to uniquely identify an input device - I had to manually construct one using (among other things) the USB port # that the device was plugged into. Not ideal.
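For a sense of what that workaround looks like, here is a sketch that builds a pseudo-identifier from the vendor, product, and location properties of an IOKit HID device (HID Utilities wraps these same IOKit calls; the bit layout is made up for illustration, not the parent's actual scheme):

    #include <CoreFoundation/CoreFoundation.h>
    #include <IOKit/hid/IOHIDKeys.h>
    #include <IOKit/hid/IOHIDLib.h>
    #include <cstdint>

    static uint32_t GetIntProperty(IOHIDDeviceRef device, CFStringRef key) {
        uint32_t value = 0;
        CFTypeRef ref = IOHIDDeviceGetProperty(device, key);
        if (ref && CFGetTypeID(ref) == CFNumberGetTypeID())
            CFNumberGetValue((CFNumberRef)ref, kCFNumberSInt32Type, &value);
        return value;
    }

    // The location ID encodes (among other things) which USB port the device
    // is plugged into, so the same gamepad on a different port yields a
    // different identifier - the "not ideal" part mentioned above.
    static uint64_t MakeDeviceId(IOHIDDeviceRef device) {
        uint64_t vendor   = GetIntProperty(device, CFSTR(kIOHIDVendorIDKey));
        uint64_t product  = GetIntProperty(device, CFSTR(kIOHIDProductIDKey));
        uint64_t location = GetIntProperty(device, CFSTR(kIOHIDLocationIDKey));
        return (vendor << 48) | (product << 32) | location;
    }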
SSE intrinsics - my experience was that Microsoft's compiler was MUCH better at generating code from SSE/SSE2 intrinsics than clang - my Windows SSE-optimized functions ran significantly faster than my "pure" C++ implementations, whereas the Mac versions ran a bit slower! My next thought was to take this particular bit of code gen responsibility away from clang and write inline assembly versions of these functions, but I took a look at the clang inline assembly syntax and decided to skip that effort. (I did write an inline assembly version using the MS syntax and squeezed an additional 15% perf over the MS intrinsic code.)
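For context, the functions being compared are of this general shape - an SSE version next to its scalar fallback (illustrative only, not the parent's code); the compiler-to-compiler gap comes down to how well each maps the intrinsics onto registers:

    #include <xmmintrin.h>  // SSE

    // out[i] = a[i] * b[i] + c[i]; count must be a multiple of 4 and the
    // pointers 16-byte aligned for the aligned load/store used here.
    void MulAddSSE(float* out, const float* a, const float* b,
                   const float* c, int count) {
        for (int i = 0; i < count; i += 4) {
            __m128 va = _mm_load_ps(a + i);
            __m128 vb = _mm_load_ps(b + i);
            __m128 vc = _mm_load_ps(c + i);
            _mm_store_ps(out + i, _mm_add_ps(_mm_mul_ps(va, vb), vc));
        }
    }

    // The "pure" C++ reference version.
    void MulAddScalar(float* out, const float* a, const float* b,
                      const float* c, int count) {
        for (int i = 0; i < count; ++i)
            out[i] = a[i] * b[i] + c[i];
    }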
Pretty much everything else (porting audio from DirectSound to OpenAL, issuing HTTP requests, kludging up a GUI, etc.) was pretty straightforward and did not have any nasty surprises.
by mavdi on 11/11/15, 8:37 AM
by anton_gogolev on 11/11/15, 7:50 AM
by trymas on 11/11/15, 7:48 AM
What can make such a considerable difference?
by kevingadd on 11/11/15, 8:18 AM
Most of these had to do with templates that expected the code inside them not to be checked until they were instantiated. The Microsoft compiler has that behavior; clang performs two-phase lookup and checks non-dependent code at the point of definition.
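A minimal illustration of the difference (not from the Banished source): classic MSVC defers checking the template body to instantiation time, so the unqualified call compiles there, while clang rejects it at definition time; qualifying the call with this-> is the portable fix.

    template <typename T>
    struct Base {
        void Helper() {}
    };

    template <typename T>
    struct Derived : Base<T> {
        void Run() {
            Helper();        // compiles on classic MSVC, error on clang:
                             // the name is looked up at definition time and
                             // dependent base classes are not searched
            this->Helper();  // portable: makes the lookup dependent on T
        }
    };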
by stevoski on 11/11/15, 9:28 AM
It took two weeks to get the code compiling and running. That turned out to be the easy part. Getting the application performing well, feeling "native", and getting the bug count down took another six months.
I love Banished and I'd like to see a completed OS X port. But I'm not expecting this to be done, like, tomorrow!
by shmerl on 11/11/15, 7:51 AM
OpenGL on OS X is still behind the times, and so far it's not even clear whether Apple will add Vulkan support when it comes out.
by packersville on 11/11/15, 3:18 PM
To this day I still don't see how it is useful.
by indifferentalex on 11/11/15, 11:43 AM
by 885895 on 11/11/15, 7:59 AM
Not just unix-like, OS X is certified UNIX.
by DeveloperExtras on 11/11/15, 10:14 AM
First make the iOS version. Then, port it over to Java. Then, port it over to C# or maybe ActionScript3/Flash.
This way, I can recursively update previous versions as the 'best solutions' to interesting problems become clear by the end of the 2nd or 3rd port. This gives the Objective-C/iOS version the attention it needs, and I can use the rapid application development features for each new port.
by maljx on 11/11/15, 12:08 PM
The code that is completely different on the platforms is stuff like HTTPS requests, the open-file dialog, and creating/deleting folders.
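Those bits typically end up behind small per-platform wrappers; a hedged sketch for the create-folder case (the function name is mine, for illustration only):

    #include <string>
    #ifdef _WIN32
      #include <windows.h>
    #else
      #include <sys/stat.h>
      #include <sys/types.h>
    #endif

    // Narrow-string sketch; a real codebase would handle Unicode paths
    // (CreateDirectoryW on Windows) and report errors more usefully.
    bool CreateFolder(const std::string& path) {
    #ifdef _WIN32
        return CreateDirectoryA(path.c_str(), nullptr) != 0;
    #else
        return mkdir(path.c_str(), 0755) == 0;
    #endif
    }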
by AdmiralAsshat on 11/11/15, 2:13 PM
Did the author buy a MacBook Pro just for this purpose? I'd assume this is his personal laptop, but his "Using a Mac" section sounds like he's not a Mac user even in his free time.
by jason_slack on 11/11/15, 5:37 PM
by jokoon on 11/11/15, 3:33 PM
Anyway, I don't really care anymore, I bought a ThinkPad instead. Cocoa is just something I can't even.
My experience has been pretty different. I'm not a professional developer though.