Random Thoughts – Randosity!

Is the iPhone X Innovative?

Posted in Apple, botch, california by commorancy on September 17, 2017

Clearly, Apple thinks so. I’m also quite sure some avid Apple fanboys think so. Let’s explore what innovation is and what it isn’t, and compare that to the iPhone X.

What is innovation?

Innovation effectively means offering something that hasn’t been seen before, either on other devices or, in fact, at all. I’ll give an example. If I create a transporter that can rearrange matter into energy, safely transmit it from point A to point B and reassemble it into a whole, that’s innovation. Why? Because even though the concept has existed in the Star Trek universe, it has never existed in the real world. This is true innovation and would fundamentally change transportation as we know it. Though I won’t get into the exact ramifications of such an invention, suffice it to say this technology would be a world game changer. This example is just to show the difference between true innovation and pseudo innovation: to be true innovation, it should be a world game changer.

So then, what is pseudo innovation? This type of innovation, also known as incremental innovation, takes an existing device and extends it with a natural progression that people expect, have perhaps even asked for, or that other devices on the market have already added. As an example, this would be taking a traditional blender and exchanging the blender bowl for a small single-serving container that doubles as a cup. This is a natural progression from an existing blender to a more useful and functional device. This is the kind of change that doesn’t change the world, but solves a small problem for a much smaller subset of people.

iPhone X Design

Let’s dissect this design from top to bottom to understand why the iPhone X is not in any way truly innovative and offers only pseudo innovation.

  • OLED display — While this is new to the iPhone, it is in no way new to mobile devices. Samsung has been shipping tablets and phones with AMOLED displays for years. In fact, I’ve personally owned a Samsung Galaxy Tab S with a Super AMOLED display for years; that display was amazing and remains that way to this day. Apple is substantially late to this party. While OLED is new to Apple’s devices, it is not in any way a technology Apple created. Worse, Apple hobbled its OLED display with the unusual design of that large black brow at the top. I still have no explanation for covering 10% of the display with an unsightly black bar. Worse, when videos play or other active content is viewed, 1/10 of that content is obscured by that black bar unless you change the settings. Such a questionable addition to an expensive phone.
  • Removal of Touch ID — This is actually negative innovation. Removing useful features from a device serves only to leave more questions than answers. Touch ID is a relatively new addition to the iPhone, and that Apple shipped the iPhone X without it is entirely unexpected. Apple should have postponed the release until it got this right. Touch ID is an intrinsic, non-intrusive technology that works in all conditions, secures the device using biometrics and offers a much safer alternative to login IDs and typed passwords (which are entirely cumbersome on small phone devices).
  • Addition of Face ID — Face recognition on a phone, while new to the iPhone, isn’t a new technology, nor was it created by Apple. Cameras have long been capable of recognizing faces when taking photos, though recognizing that a face exists is not the same as identifying the person. Apple takes it to the identification level with Face ID, using the face to identify the owner of the phone. However, this is an untested technology on a phone, and it raises questions. If any of this facial data is ever transmitted to or stored on Apple’s servers, there’s a huge privacy concern here, particularly if Face ID captures something it shouldn’t have. Touch ID is never susceptible to this privacy intrusion problem.
  • Wireless Charging — Again, Samsung devices have had wireless inductive charging for years. This addition, while new to Apple’s phones, is not in any way innovation. Wireless charging has previously existed on other non-Apple devices and, again, was not created by Apple. Apple has embraced the Qi wireless charging standard up to a point. However, Apple has blocked iPhones from using Qi fast charging, instead choosing to offer up its own standard sometime in 2018.
  • Fast Charging — This allows the phone to charge its battery perhaps 5x faster than the iPhone charges today. This is separate from Wireless Charging, but Wireless Charging can take advantage of it.
  • Edge to Edge Display — While Apple’s implementation of this screen seems edge to edge, it really isn’t. There is a small bezel around the display due to the way the case is designed. While it is probably the most edge-to-edge display we’ve seen in a phone to date, it isn’t the first. Samsung’s Galaxy Note 8 offered an edge-to-edge display from side to side with reasonably small top and bottom bezels. Suffice it to say that what Apple has done here is merely semantics. Now, if Apple hadn’t added that questionable brow covering 10% of the display, it might have been a small achievement.
  • Faster CPU, more RAM, faster overall performance — To be expected in any new release, though it will be outdated quickly.

In fact, nothing included on the iPhone X is a newly created Apple idea. Apple is firmly playing catch-up with the Joneses (or in this case, Samsung). Samsung has already produced phones with every single one of the technological advances Apple has put into the iPhone X.

Fanboys might claim that the iPhone X is all new. No, it’s all nuance. Apple is simply catching up with existing technologies and ideas to improve its new phones (and I use the word improve loosely). There is nothing actually innovative about the iPhone X. In fact, from a design perspective, it’s probably one of the ugliest phones Apple has yet produced. The brow seals that fate. If there were a Razzie award for design, Apple would win it for 2017.

iPhone 8

This is one of those things that always irks me about Apple. That they’re releasing the iPhone 8 at all is a bit of a mystery. If you’re introducing a new phone, why keep this line of phones at all? Bet the bank on the new model or don’t do it. This is what Apple has always done in the past. That Apple is now hedging its bets on two different models seems a bit out of the ordinary for a company that has typically bet the bank on new ideas. I guess Apple is getting conservative in its old age.

Other than wireless and fast charging introduced into the iPhone X, nothing else has trickled its way into the iPhone 8. Effectively, the iPhone 8 is simply a faster iPhone 7 with Qi wireless and fast charging support.

Let’s talk about wireless and fast charging a little here. While the iPhone 8 is capable of both wireless and fast charging, it won’t come with either out of the box. In fact, Apple’s fast wireless charging pads won’t be released until sometime in 2018 (probably late spring). While there are other Qi wireless chargers you can buy now, those chargers won’t fast charge. Worse, the iPhone 8 still ships with the standard Lightning USB cable and standard-speed charger. If you want fast charging, you’re going to need to invest in extra accessories (cables and chargers) to get that faster charging performance. Until Apple releases its wireless charging pad, you can’t even get wireless and fast charging together. On top of your phone’s cost, expect to dump an extra $100-200 on these accessories (possibly more than once, if you buy something now and again when Apple releases its own accessories).

Mac Computers

Just to reiterate the point about lack of innovation, I’ll bring up one more thing. The MacBook and Mac line of computers has been so stagnant and so far behind the times that I’m not even sure Apple can catch up at this point. While every other non-Apple notebook on the market (even the cheapest, smallest model) now includes a touch display, Apple continues to ship its Mac computers without touch surfaces, in defiance of that trend. There’s a point where you have to realize that touch surfaces have become a necessity in computing. The ironic thing is, we have Apple to blame for this dependency, thanks to Apple introducing the original iPad.

Yet Apple’s stubborn stance against touch displays on the Mac has become a sore point with these devices. Apple, lose your stubbornness and finally release touch-friendly MacBook computers at the very least, though I’d like to see touch screens on every Mac computer. You’ve had Spotlight in Mac OS X for years now (a first step towards touch displays), yet here we are with one computer that has a Touch Bar. The Touch Bar is such a non-innovation as to be a step backwards.

Let’s just get rid of the worthless Touch Bar and finally introduce Macs with touch displays, which is what we want anyway. Since we’re playing catch-up, let’s finally catch the Mac line up to every other non-Apple notebook.

Apple’s Worms

It’s clear: Apple has lost its innovative ways. Apple now relies entirely upon existing technologies and ideas, firmly throwing together half-assed ideas and calling them complete. The iPhone X idea should have been tossed before it ever saw the light of day. Had Jobs been alive to see it, the iPhone X would have been tossed out the window in favor of a new idea.

Additionally, Apple’s technology ideas across its product lines are entirely fractured:

  • The iPhone ships with Lightning connectors, but no other device in Apple’s lineup supports Lightning
  • The iPhone has removed the 3.5mm headphone jack for no other reason than, “just because”
  • New Macs now ship with USB-C, yet none of Apple’s mobile devices support this standard
  • USB-C Macs require dongles because none of Apple’s accessories support USB-C (other than the converter dongles)
  • The Apple Watch has no direct integration with the Mac. It only integrates with a single iPhone.
  • Apple ships Lightning headphones that can only be used with the iPhone line, not Macs
  • Macs still fail to support touch displays
  • Macs still ship with 3.5mm headphone jacks
  • Apple’s MagSafe adapters were an amazingly innovative way to supply power to the system, yet they have been tossed out in favor of the inferior USB-C connector
  • The iPhone and Mac are only half-assedly integrated with one another. The best we get is USB connections and AirDrop. Universal Clipboard only works about half the time, and even then it’s not always useful depending on the copied content. The single app that works quite well is iMessage. In fact, the entire reason this integration works at all is iCloud.

Innovation is about putting together ideas we’ve never seen before and taking risks. It’s about offering risky ideas in devices that have the potential to change the game entirely. There’s absolutely nothing about the iPhone X that’s a game changer. Yes, I do want an iPhone with an OLED display, because I want the super high contrast ratio and vibrant colors. If that had been available on the iPhone 8, I’d probably have upgraded. For now, there’s no reason to upgrade from any of Apple’s most recent products. Wireless charging just isn’t enough, and a hobbled OLED display is just not worth it.


Game Review: Assassin’s Creed Syndicate

Posted in video game design, video gaming by commorancy on November 9, 2015

Warning: This review may contain spoilers. If you want to play this game through, you should stop reading now.

While Ubisoft got some parts of this game right, they got a lot of it very, very wrong. And this game cheats, badly. Let’s explore.

The Good

As with most Assassin’s Creed games, Syndicate is filled with lots of very compelling gameplay in its open world environment. The stories are decent but short, and the assassinations make it feel (mostly) like Assassin’s Creed I. They’ve done well to bring back a lot of what made Assassin’s Creed I fun. And, if you’re sneaky enough, you get the chance to use cover assassinations, air assassinations and haystack assassinations with much more regularity. Unfortunately, there’s also a whole lot of bad to go with that fun; the good in this game is about equally outweighed by the bad and the ugly.

The Bad

Controls

As with every single Assassin’s Creed game, the controls get harder and harder to work as the game progresses. And by harder and harder, I mean the designers require much more fine-grained control over button presses or else you miss the opportunity to do whatever it is they have you doing. This usually means you miss your opportunity to take down an enemy, you fall off a building, you can’t escape a fight, or whatever.

For example, a person steals something and you have to tackle the thief. Unfortunately, if you happen to run alongside a carriage while chasing the thief, the carriage will usurp the tackle button and you’ll end up stealing the carriage (all the while letting the thief get away). The really bad part is that you cannot break out of the carriage-stealing maneuver and continue the chase. Oh no, you have to watch the entire motion capture playback from beginning to end while the thief you were inches from tackling runs away.

As another example, there are times when you begin a fight and a ton of enemies surround you. Then one of them takes a swing and practically knocks you out with one blow. You don’t even get enough time to press the medicine button before you’re dead or desynchronized.

On top of this, the game still does not tell you every side mission requirement in advance. You only find them out after you’ve failed them.

Zipline Gun

And this is not the only instance of these bad controls. Once you get the zipline gun, it’s handy for quick getaways to the top of a building. That is, except when the designers prevent you from using it. And they do prevent its use intentionally in some areas: you can stand in front of some buildings and the zipline control appears; in front of others, nothing. This is especially true in areas where you have to complete a mission. So you’ll be down on the ground and spotted, and the first thing to do is find a rooftop to zipline to. Unfortunately, in a lot of mission areas you can’t.

Ubisoft, if you’re going to give me the zipline gun, let me use it on any building of any size, not just the ones you randomly allow. This is so frustrating.

Calling Attention

When you’re sneaking around as an assassin, the pedestrians around you constantly say things like, “I hope he knows he can be seen” and other stupid things. While it doesn’t draw attention from enemies, it’s just nonsensical and stupid; most people would simply ignore someone skulking around. Worse, it’s not like we have control over day or night in this game. Clearly, most of an assassin’s work should be done at night under the cover of darkness. Instead, you’re out doing this stuff at noon.

Syndicate

Syndicate? What syndicate? Sure, you have a gang that you can find and call together on the street, but you barely ever get to use them, let alone on missions. You can rope in a few at a time, but it’s almost worthless. When you enter any place, the only thing they end up doing is drawing attention to you. As an assassin, that’s the last thing you want. You want stealth kills, not big grandiose street-kill events. This is not Street Fighter. Beyond that, there is no other syndicate. It’s not like you can switch to and play Greenie, which would have been a cool thing. It’s not like there are other assassins roaming the city who join in on the cause. I was hoping the syndicate would be a huge group of assassins who all band together to get something done. Nope.

Recognition

On some levels, you don’t get recognized quickly. On others, it’s almost instantaneous. It’s really frustrating that there isn’t one consistent level of recognition in this game. Instead, it’s random and haphazard, based on the level designer’s whim.

The Ugly

Glitchy

While it may not be anywhere near as bad as Unity, it’s still bad enough that you have to start (and restart) missions over to complete them. I’ve had glitches that locked my character into a move I had to quit the game to stop. I’ve had glitches where Jacob falls off a rooftop merely by standing there. I’ve had glitches where I stand inches from an enemy and don’t get the assassinate action. I can hang below windows with enemies standing right in front of me and get no assassinate action. I’ve fallen off the zipline for no reason.

The controls get worse and worse as the game progresses, to the point that if you want to get anything done, you nearly can’t.

Cinematics you can’t abort

Throughout the game, you’ll find that when you press a button to enter a carriage or zipline to the top of a building, you cannot break out of that action until it’s fully complete. If you were trying to do something else and accidentally launched into one of these cinematics, you have to let the action complete entirely before you get control back.

Character Levels

The introduction of character levels is just plain stupid. I understand why they’re in the game, but the reality is, they make no sense. Fighting a level 9 versus a level 2 is not at all realistic. You don’t have levels in real life; you have people who are more skilled than others. These enemies are no more skilled than any other. If I walk into an area, my level shouldn’t dictate how hard it is to kill an enemy. I should be able to perform the same moves on a level 2 or a level 9 and take them down at the same rate. In fact, enemies shouldn’t have levels at all.

Bosses & Gang Wars

As you complete a section of the city, it unlocks a gang war segment. So your gang fights their gang. Except it’s not really a gang war; it’s half a gang war. The first segment starts out as a proper gang war where your gang fights theirs and you get to participate. After that first segment is complete, you must fight 5 or 6 of their gang members alone (including the boss). That’s not exactly a gang war; that’s an unfair fight. Where are my 4 or 5 other gang members to help me out? If it’s a gang war, make it a gang war. If it’s to be a 1-on-1 fight, then make it so. Ganging up 5 or 6 against 1 is not a gang war and is in no way fair. I know some gamers like beating these odds, but I find it contrived and stupid. If it’s supposed to be a gang war, make it a fight between gangs.

The only consolation is that the game gives you one shot at taking down the section boss right before the gang war. If you can manage to kill them then, you don’t have to do that segment during the gang war. Still, a gang war should be about gangs.

Desynchronization and Load Times

This is one of the ugliest parts of this game. If you fall off a building and die, you have to wait through an excruciatingly long load time. So long, in fact, that you could go make yourself a cup of coffee and be back in time for it to finally load. I mean, this is a PS4 and the game is installed on the hard drive, yet it still takes nearly 2-4 minutes just to reload a level? I’m amazed (not in a good way) at how long reloading takes. Once the game finally does reload, it drops you off some distance away from where you were, which is also frustrating. Why can’t it drop my character exactly where I was, or at least close enough that I don’t have to run a ton just to get back there?

Starrick Boss Level

This level is ultimately the most asinine fail of the entire game. Once you finally find the shroud (which is the whole point of the present-day piece of this game), the game should immediately stop and move to the present day. No. Instead, you have to attempt to assassinate Starrick in one of THE most asinine levels I’ve ever played in a game.

Evie and Jacob, the twins, would have to be the two most stupid people on Earth. Otherwise, they would simply realize they could cut that shroud off of him with a good swipe of their knives, drag it away and then stab him. No. Instead, you have to attempt to wear him down while he’s wearing the shroud. As if that were possible with a supposedly healing shroud. If it were truly as healing as it’s shown to be, there would be no way to wear his health down, ever. I’m not sure what the writers were thinking here, but this level is about as stupid as it gets.

Worse, there are times where Starrick gets these hammer-on-your-character-without-fighting-back segments. Starrick just punches your character and you stand there taking it. Really? There’s no reason given for these segments. They just wear down your health without any method of fighting back, breaking out or countering. Now that’s just plain out cheating by a game. There is absolutely no need for this part of the fight. When in real life would this ever happen? Like, never. It makes the ending twice as hard without any real payoff.

Either of the twins could cut and pull the shroud off of him. It’s very simple. Then just assassinate him like anyone else. Why must you melee this guy to death? These are assassins who kill from the shadows or by using other stealth methods. Assassins are not street fighters. That the game turns AC into Street Fighter is just plain stupid. This is NOT WHY I BUY Assassin’s Creed games. If I wanted a fighting game, I’d go buy Mortal Kombat or Street Fighter. The game devs have lost it. Whoever thought it would be a great idea to end this Assassin’s Creed game by turning it into a stupid fighting game should leave the game development field, and should specifically be fired from Ubisoft. That person has no business making gaming choices for this (or any) game franchise.

Overall

I give this game 4.0 stars out of 10. It’s a reasonable effort in places, but it’s in no way innovative, and the ending plain out sucks from so many perspectives. The zipline is cool, but it doesn’t help you as much as it needs to. There’s way too much carriage driving. The boss levels are mostly okay up until Sequence 8’s Street Fighter ending… especially considering that the ‘present day’ part only needed to confirm where the shroud was located. After locating the shroud, the game should have immediately transitioned to the present day. There is absolutely no need to kill Starrick, especially in a Street Fighter way. These people are assassins, not fighters. Sure, they can fight, but this tag-team-switching-melee-brawl-that-only-intends-to-wear-down-health is just insanely stupid, especially considering just how quickly that fight would be over by cutting the shroud off of him. I don’t even know how many times either of the two of them got close enough to yank that thing off of him. Yet the game insists on throwing punches to bring him down.

Ultimately, it has an insanely stupid ending that is majorly out of character for a game franchise that deserves so much better and offered so much promise. And, of course, where is the Syndicate in all of this melee stuff? Why is the gang not there? Instead, Starrick should have been killed by a standard overhead assassination from both of them simultaneously: instant decapitation. I’d have preferred if Greenie had been in on the action and all three of them took Starrick out. Even the most healing shroud in the world couldn’t heal a severed head… and it should have been done in one big maneuver by both or all three of the assassins at once. That would have been an ending befitting the name Assassin’s Creed.

Recommendation: Rent

Rant Time: You gotta hate Lollipop

Posted in Android, botch, business by commorancy on May 27, 2015

You know, I can’t understand the predilection for glaring white backgrounds and garish bright colors on a tablet. In comes Lollipop, trying to act just like iOS and failing miserably at it. OMG, Lollipop has to be one of the most garish and horrible user interfaces to come along in a very long time. Let’s explore.

Garish Colors on Blinding White

Skeuomorphism had its place in the computer world. Yes, it was ‘old timey’ and needed to be updated, but to what, exactly? One thing can be said: skeuomorphism was at least easy on the eyes. But Lollipop, with its white backgrounds and horrible teals, pinks and oranges? Really? This is considered ‘better’? Sorry, but no. A thousand times, no. As a graphic designer and artist, I consider this one of the worst UI choices for handheld devices.

If, for example, the engineers actually used the light sensor on the damned things to determine when the room is dark and then changed the UI to something easier on the eyes, I’d be all over that. But, nooooooo. You’re stuck with these stupid blinding white screens even when the room is pitch black. So there you are with your flashlight lighting up your face, all while trying to use your tablet. I mean, how stupid are these UI designers? You put light sensors on these devices… use them.
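
For what it’s worth, the plumbing for this already exists in Android. Below is a minimal Kotlin sketch of the idea, assuming modern AndroidX APIs (which postdate Lollipop); the activity name and the 30-lux threshold are purely illustrative:

```kotlin
import android.content.Context
import android.hardware.Sensor
import android.hardware.SensorEvent
import android.hardware.SensorEventListener
import android.hardware.SensorManager
import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity
import androidx.appcompat.app.AppCompatDelegate

class AutoThemeActivity : AppCompatActivity(), SensorEventListener {
    private lateinit var sensorManager: SensorManager

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        sensorManager = getSystemService(Context.SENSOR_SERVICE) as SensorManager
        // Register for ambient light readings, if the device has the sensor.
        sensorManager.getDefaultSensor(Sensor.TYPE_LIGHT)?.let { lightSensor ->
            sensorManager.registerListener(this, lightSensor, SensorManager.SENSOR_DELAY_NORMAL)
        }
    }

    override fun onSensorChanged(event: SensorEvent) {
        val lux = event.values[0]
        // Below roughly 30 lux the room is dark: switch to a dark theme.
        // A real implementation would debounce this instead of reacting to every reading.
        AppCompatDelegate.setDefaultNightMode(
            if (lux < 30f) AppCompatDelegate.MODE_NIGHT_YES else AppCompatDelegate.MODE_NIGHT_NO
        )
    }

    override fun onAccuracyChanged(sensor: Sensor, accuracy: Int) { /* not needed here */ }

    override fun onPause() {
        super.onPause()
        sensorManager.unregisterListener(this)  // stop listening when not visible
    }
}
```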

Stupid UI Designers?

Seriously, I’ll take skeuomorphism over these blazing white screens any day. I mean, seriously? Who in their right mind thought this in any way looked good? Why rip a page from Apple’s horrible design book when you don’t have to? I’ll be glad when Lollipop is a thing of the past and Google has decided to blaze its own UI way. No, Google, you don’t need to follow after Apple.

Just because some asinine designer at Apple thinks this looks good doesn’t mean it actually does. Get rid of the white screens. Let’s go back to themes so we can choose how we want our systems to look. Blaze your own path and give users the choice of how their OS looks. Choice is the answer, not forced compliance.

Smaller and Smaller

What’s with the smaller and smaller panels and buttons all of a sudden? At first, the pull-down was large and fit nicely on the screen. The buttons were easy to touch and the sliders easy to move. Now it’s half the size, with buttons and sliders that are nearly impossible to grab and press. Let’s go back to sizing buttons so they’re finger-friendly on a tablet, mkay? The notification pull-down has been reduced in size for no apparent reason. Pop-up questions are half the size, and the buttons and sliders on them are twice as hard to hit with a finger.

Google, blaze your own path

Apple has now become the poster child for how not to design user interfaces. You don’t want to rip pages from their book. Take your UI designers into a room and let them come up with ideas unique to Google and Android. Don’t force them to use a look and feel from an entirely different company, built on ideas that are outright horrible.

Note, I prefer dark or grey backgrounds. They are much easier on the eyes than blazing white backgrounds. White screens are great for only one thing, lighting up the room. They are extremely hard on the eyes and don’t necessarily make text easier to read.

Google, please go back to blazing your own trail separately from Apple. I’ll be entirely glad when this garish-colors-on-white fad goes the way of the Pet Rock. And once this stupid trend is finally gone, I’ll be shouting good riddance from the top of the Los Altos hills. It can’t happen soon enough. For now, dayam, Google, get it together, will ya?


Apple’s newest MacBook: Simply Unsatisfying

Posted in Apple, botch, business, california by commorancy on March 12, 2015

It’s not a MacBook Air. It’s not a MacBook Pro. It’s simply being called the MacBook. Clever name for a computer, eh? It’s not like we haven’t seen this brand before. What’s the real trouble with this system? A single USB-C connector. Let’s explore.

Simplifying Things

There’s an art to simplification, but it seems Apple has lost its ability to rationally understand this fundamental concept. Jobs got it. Oh man, did Jobs get the concept of simplification, in spades. Granted, not all of Jobs’s meddling in simplification worked. Like a computer with only a mouse and no keyboard: great concept, but you really don’t want to enter text through an on-screen keyboard. This is the reason the iPad is so problematic for anything other than one-liners, at least without some kind of audio dictation system. At the time, the Macintosh didn’t have such a system; with Siri, however, we do. I’m not necessarily endorsing that Apple bring back the concept of a keyboard-less computer, though with a slight modification to Siri’s dictation capabilities, it would now be possible.

Instead, the new MacBook has taken things away from the case design. More specifically, it has replaced all of those, you know, clunky, annoying and confusing USB 3.0 and Thunderbolt port connectors that mar the case experience. Apple’s engineers have taken this old and clunky experience and ‘simplified’ it down to exactly one USB-C port (excluding the headphone jack… and why do we even need this jack again?).

The big question: “Is this really simplification?”

New Case Design

Instead of the full complement of ports we previously had, such as the clever MagSafe power port, one or two Thunderbolt ports, two USB 3.0 ports and an SD card slot, we now have exactly one USB-C port. And it’s not even a well-known or widely used port style yet.

Smart. Adopt a port that literally no one is using, then center your entire computer’s universe around this untried technology. It’s a bold, if risky, maneuver for Apple. No one has ever said Apple isn’t up for risky business ideas. It’s just odd that they centered it on an open standard rather than something custom designed by Apple. Let’s hope Apple has massively tested plugging and unplugging this connector. If it breaks, you’d better hope your AppleCare service is active. And since plugging and unplugging falls under wear and tear, it might not even be covered. Expect to spend more time at the Genius Bar arguing over whether your computer is covered when this port breaks. On the other hand, we know the MagSafe connector is almost impossible to break. How about this unknown USB-C connector? Does it have the same functional lifespan? My guess is no.

I also understand that USB-C inherits the 10 Gbps bandwidth standard and has a no-confusion, plug-in-either-way connector style. But it’s not as if Thunderbolt didn’t already offer the same transfer speed, albeit without the plug-in-either-way cable. So I’m guessing this means Thunderbolt is officially dead?

What about the Lightning cable? Apple only recently designed and introduced the Lightning connector for charging and data transfer. Why not use the Lightning connector and add a faster data transfer standard to it? Apple spent all this time and effort on this cool new cable, but what the hell? Let’s just abandon that too and go with USB-C? Is it all about throwing out the baby with the bathwater over at Apple?

I guess the fundamental question is: really, how important is this plug-in-either-way connector? Is Apple insinuating that the general public is so dumb it can’t figure out how to plug in a cable? Yes, trying to get microUSB connectors inserted in the dark (because they only go in one direction) can be a hassle. But the real problem isn’t the hassle; the real problem is that the connector itself was engineered all wrong. Trying to fit a microUSB cable into a port is a problem because it’s metal on metal. Even when you do manage to line it up in the right direction, it sometimes still won’t go in. That’s a fundamental flaw in the port connector design; it has nothing to do with directionality. I digress.

Fundamentally, a plug-in-either-way cable should be the lowest item on the agenda. The highest should be simplifying to give a better overall user experience, not hobbling the computer to the point of being unnecessarily problematic.

Simply Unsatisfying

Let’s get into the meat of this whole USB-C deal. While the case now looks sleek and minimal, it doesn’t really simplify the user experience; it merely changes it. It’s basically a shell game: it moves the ball from one cup to another but fundamentally doesn’t change the ball itself. So, instead of carrying only a power adapter and the computer, you are now forced to carry a computer, a power adapter and a dock. I fail to see how this simplifies the user experience at all. I left docks behind when I walked away from Dell notebooks. Now we’re being asked to use a dock again by, of all companies, Apple?

The point of making changes in any hardware (or software) design is to improve usability and the user experience. Changing the case to offer a single USB-C port enhances neither; it is merely a cost-cutting measure by Apple. Apple no longer needs to pay for all of these arguably ‘extra’ (and costly) ports on the case. Removing all of those ‘extraneous’ ports means less cost for the motherboard and the case die-cuts, but at the expense of the user having to carry around more things to support the computer. That doesn’t simplify anything for the user. It also burdens users by forcing them to pay more for things that were previously included in the system itself, not to mention carry around yet more dongles. I’ve never known Apple to foist less of an experience on the user as a simultaneous cost-cutting and accessory money-making measure. This is most definitely a first for Apple, but not a first they want to become known for. Is Apple now taking pages from Dell’s playbook?

Instead of walking out of the store with a computer ready to use, now you have to immediately run to the accessory aisle and spend another $100-200 (or more) on these ‘extras’. Extras, I might add, that were previously included in the cost of the previous generation of computers. But now, they cost extra. So that formerly $999 computer that already had everything you needed will now cost you $1100-1200 or more (once you consider you also need a bag to carry all of these extras).

Apple’s Backward Thinking?

I’m sure Apple is thinking that eventually this is all we’ll need. No more SD cards, no more Thunderbolt devices, no more USB 3 connectors; we’ll just do everything wirelessly. After all, you have the (ahem) Apple TV for a wireless remote display (which would be great if only that technology didn’t suck so badly for latency and suffer from horrible MPEG artifacting because the bit rate is too low).

Apple likes to think it’s thinking about the future. But by the time the future arrives, what Apple has chosen is already outdated, because it turns out no one else is actually using that technology. Then Apple has to resort to a new connector design or a new industry standard because no other computers have adopted what Apple is pushing.

For example, Thunderbolt is a tremendous idea. By today, this port should have been widely used and widely supported, yet it isn’t. Few hard drives use it. Few accessories support it. Other than Apple’s use of this port to drive extra displays, that’s about the extent of it. It’s effectively a dead port on the computer. Worse, just about the time Thunderbolt might actually be picking up steam, Apple dumps it in favor of USB-C, which offers the same transfer speed. At best, a lateral move, technologically speaking. If this port had offered 100 Gbps, I might not even have written this article.

Early Adopter Pain

What this all means is that users who buy into this new USB-C-only computer (I intentionally ignore the headphone jack because it’s still pointless) will suffer early adopter pains. Not only will you be almost immediately tied to buying Apple gear, Apple has likely set up the USB-C connector to require licensed and ID’d cables and peripherals. If so, buying a third-party unlicensed cable or device means Apple is likely to prevent it from working, just as it did with unlicensed Lightning cables on iOS.

This also means that, for at least 1-2 years, you’re at the mercy of Apple to provide you with the dongle you need. If you need VGA and there’s no dongle, you’re outta luck. If you need a 10/100 network adapter, outta luck. Until or unless a specific situational adapter becomes available, you’re stuck. Expect some level of pain when you buy into this computer.

Single Port

In addition to all of the above, let’s fundamentally understand what a single port means. If your power brick is plugged in, that’s it; you can’t plug anything else in. Oh, you need to run two monitors, read from an SD card, plug in an external hard drive and charge your computer? Good luck with that, unless you buy a dock that offers all of these ports.

It’s a single port being used for everything. That means it has a single 10 Gbps path into the computer. So if you plug in a hard drive that consumes 5 Gbps and a 4K monitor that consumes 2 Gbps, you’re already well on your way to topping out that connector’s entire bandwidth into the computer. Or what if you need a 10 Gbps Ethernet adapter? That alone pretty much consumes the entire bandwidth of this single USB-C connector. Good luck trying to run a hard drive and a monitor with that setup.
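
To put that arithmetic in one place, here’s a trivial Kotlin sketch using the rough, illustrative figures above (real-world throughput varies with protocol overhead, and the peripheral list is hypothetical):

```kotlin
fun main() {
    val portBudgetGbps = 10.0  // the single USB-C path into the machine

    // Hypothetical peripherals, using the rough figures from the text above.
    val peripherals = mapOf(
        "external hard drive" to 5.0,
        "4K monitor" to 2.0,
        "10 GbE adapter" to 10.0
    )

    val demandGbps = peripherals.values.sum()
    println("Demand: $demandGbps Gbps against a $portBudgetGbps Gbps budget")
    if (demandGbps > portBudgetGbps) {
        println("Oversubscribed by ${demandGbps - portBudgetGbps} Gbps; everything contends for one port.")
    }
}
```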

Where an older MacBook Air or Pro had two 5 Gbps USB 3 ports and one or two 10 Gbps Thunderbolt ports (offering well over 10 Gbps of aggregate paths into the computer), the new MacBook supports a maximum of 10 Gbps over that single port. Not exactly the best trade-off for performance. Of course, the reality is that current Apple motherboards may not actually be capable of handling a 30 Gbps input rate, but at least the ports were there to try, and I would expect the motherboard to handle an input rate greater than 10.

With the new MacBook, you are firmly stuck at a maximum input speed of 10 Gbps because it is a single port. Again, an inconvenience to the user. Apple once again assumes that 10 Gbps is perfectly fine for all use cases, and I’m guessing Apple hopes users simply won’t notice. Technologically, this is a step backward, not forward.

Overall

Among the early adopter problems and the relevancy problems USB-C has to overcome, this computer now offers a more convoluted user experience. Additionally, instead of offering something truly more useful that would enhance usability, such as a touch screen with an exclusive Spotlight mode, Apple opted to take this computer in a questionable direction.

Sure, the case colors are cool and the idea of a single port is intriguing, but it’s only when you delve deep into the usefulness of this single port that the design quickly unravels.

Apple needs a whole lot of help in this department. I’m quite sure that had Jobs been alive, while he might have introduced the simplified case design, it would have been overshadowed by the computer’s feature set (i.e., touch screen, better input device, better dictation, etc.). Instead of trying to wow people with a single USB-C port (which offers more befuddlement than wow), Apple should have fundamentally improved the actual usability of this computer by enhancing the integration between the OS and the hardware.

The case design ultimately doesn’t much matter; the usability of the computer itself does. Until Apple understands that we don’t really care what the case looks like as long as it provides what we need to compute without added hassles, weight and costs, Apple’s designers will continue running off on these tangents, spending useless cycles redesigning minimalist cases that don’t benefit from it. At the very least, Apple needs to understand that there is a point of diminishing returns when rethinking minimalist designs… and with this MacBook design, the Apple designers have gone well beyond that point.

Technology Watch: Calling it — Wii U is dead

Posted in botch, video game design, video gaming by commorancy on June 10, 2013

I want Nintendo to prove me wrong. I absolutely adore the Wii U system and its technology. The Gamepad is stellar and feels absolutely perfect in your hands; it just needs a better battery, because the battery life sucks. There’s no doubt about it, the Wii U is an amazing improvement over the Wii. So what’s wrong with it?

Titan Tidal Forces

There are many tidal forces amassing against the Wii U that will ultimately be its demise. Like the amazing Sega Dreamcast and, before that, the Atari Jaguar, the Wii U will likely expire before it even makes a dent in the home gaming market. Some consoles just aren’t meant to be, and the Wii U, I’m calling it, will be discontinued within 12 months in favor of a newly redesigned and renamed ‘innovative’ Nintendo console. Let’s start with the first tidal force…

What Games?

Nintendo just cannot seem to entice any developer interest in porting games to the Wii U, let alone creating native titles. With big game franchises like BioShock Infinite, Grand Theft Auto V, Saints Row 3 and (surprisingly) Activision’s Deadpool side-stepping the Wii U, this tells me that at least Rockstar and Activision really don’t have much interest in producing titles for this console. Even bigger titles like Call of Duty, which did make it to the Wii U, didn’t release on the same day as the PS3 and Xbox versions. Call of Duty actually released later, as did The Amazing Spider-Man.

Worse, Nintendo doesn’t really seem committed to carrying any of its own franchises to this console in any timely fashion. To date, there still isn’t even an announcement of a native Zelda for the Wii U. Although we’re not yet past E3, so I’ll wait and see on this one. My guess is that there will be a Zelda, but it will likely fall far shy of what it could or should have been.

Basically, there are literally no upcoming game announcements from third-party developers, and there’s especially nothing forthcoming from the big franchises on the Wii U (other than Ubisoft’s Assassin’s Creed IV, which is likely to be just another mashup and rehash). Yes, there are a number of B-titles and ‘family’ titles, but that’s what Nintendo is always known for.

Sidestepped, but why?

I see titles like Grand Theft Auto V, Saints Row 3, Destiny and Deadpool with no mention of a Wii U version. For at least GTA5 and Saints Row, these developers likely had enough lead time to create a Wii U version. So what happened? Why would these games not be released for the Wii U? I think it’s very clear: these developers don’t think they can recoup the investment needed to produce the game for that console. That doesn’t mean the games won’t be ported to the Wii U six months after the Xbox, PS3 and PC releases. But then, what’s the incentive to play a six-month-old game? I don’t want to pay $60 for has-beens; I want new games to play.

Hardcore gamers want the latest at the moment it’s released, not six months after other consoles already have it. As a hardcore gamer, I don’t want to wait for titles to release. Instead, I’ll go buy an Xbox or a PS so I can play the game when it’s released, not wait 6-9 months for a poorly ported version of the game.

Competition

With the announcement of both Sony’s PS4 (*yawn*) and Microsoft’s Xbox One ( :/ ), these two consoles together are likely to eclipse whatever hope the Wii U has of capturing the hardcore gaming element. In fact, it’s likely that Sony’s PS4 is already dead as well, but that’s another story. As for the lackluster announcement of the Xbox One, we’ll just have to wait and see. Needless to say, people only have so much money to spend on hardware, and only one of these consoles can really become dominant in the marketplace. For a lot of reasons explored later in this article, Nintendo’s Wii U cannot survive on its present course.

I can’t really call which is the bigger yawn, PS4 or Xbox One, but both have problems; namely, no compatibility with previous console games, which really puts a damper on both of these next-gen consoles. Maybe not enough to keep either of them from becoming successes in 5 years, but immediate adoption is a concern. Available launch titles will make or break these new consoles since backwards compatibility is unavailable; without launch titles, there’s literally nothing to play (other than Netflix, which you can get for far less than the price of a console… i.e., Roku). Competition alone is a huge tidal force against Nintendo that will ultimately keep the Wii U in third place, if not outright dead.

Let’s not forget the Android-based nVidia Shield, which is as yet an unknown quantity. The way it’s currently presented, with the flip-up screen and the requirement to stream games from a PC, is a big downer on its usability as a portable. I don’t believe nVidia’s approach will succeed. If it’s a portable system, it needs to be truly portable with native games. If it’s a console, make it a console and split the functionality into two units (a controller and a base unit). The all-in-one base unit and controller, like the Shield, isn’t likely to be successful or practical. The attached screen, in fact, is 1) fragile and likely to break with heavy usage and 2) hard to play games on because the screen shakes (loosening the hinge) when you shake the controller. For the PS Vita, this form factor works okay. For the Shield, which still requires a PC to function, it isn’t a great deal, especially at the $350 price tag.

Nintendo Itself

Nintendo is its own worst enemy. Because it has always pushed and endorsed ‘family friendly’ (all-ages) games over ‘hardcore’ (17+) games, the Wii U has pushed Nintendo into an extremely uncomfortable position. It must now consider allowing extremely violent, bloody, explicit-language games onto the Wii U to even hope to gain market share with hardcore 17-34 aged gamers. In other words, Nintendo finally has to grow up and make the hard decision: is it or isn’t it a hardcore gamer system? Nintendo faces this internal dilemma, which leaves the Wii U hanging in the balance.

It’s clear that most already-released titles have skirted this entire problem. Yes, even Call of Duty and ZombiU do, mostly. Assassin’s Creed III is probably the hardest-core game on the system, and even that isn’t saying much.

Game developers see this and really don’t want to wrestle with having to ‘dumb down’ a game to Nintendo’s family-friendly standards. If I were a developer, I’d look at the Wii U and also ask, “Why bother?” Unfortunately, this is a catch-22 for Nintendo: it can’t get people to buy the system without titles, but it can’t rope in developers to write software without an audience for those titles. Developers just won’t spend their time writing native titles for a system without enough users to justify the development expense.

Worse, the developers realize they would also have to provide a ‘dumbed down’ version for the Nintendo platform to placate Nintendo’s incessant ‘family friendly’ attitude. For this reason, Nintendo can’t turn the Wii U into a hardcore system without dropping these unnecessary and silly requirements for hardcore games. Nintendo, as a word of advice: just let the developers write and publish the game as it is. Let the ratings do the work.

Bad Marketing

For most people, the perception is that the Wii U is nothing more than a slightly different version of the Wii. The marketing was all wrong for this console, and most people’s perceptions of this system are completely skewed; they really don’t know what the Wii U is other than just another Wii. This issue is cemented by naming the system the ‘Wii U’. It should have had an entirely different name, without the word ‘Wii’. Unfortunately, the Wii was mostly a fad and not a true long-lasting gaming system. It picked up steam at first not because it was great, but because people latched onto its group gaming quality. For a time, people liked the ‘invite people over for a party’ quality of the Wii; this group gaming quality was something no other gaming system had up to that point. Then came the Kinect and the Move controllers, and the competition wiped that advantage out.

The Wii U design has decidedly dropped the idea of group gaming in favor of the Gamepad, which firmly takes gaming back to a single-player experience. Yes, the Wii U does support the sensor bar, but few Wii U games use it. Worse, the Wii U doesn’t even ship with a Wiimote or Nunchuk, firmly cementing the single-player experience. Only Wii-compatible games use the sensor bar for the multiplayer experience. This focus back on single-player play again says Nintendo is trying to rope in hardcore gamers.

Unfortunately, the marketing plan for the Wii U just isn’t working. The box coloring, the logo, the name and the look all make it seem like a small, minimal upgrade to the Wii. Until people actually see a game like Batman: Arkham City, The Amazing Spider-Man or Call of Duty actually running on the Wii U, they really don’t understand what the ‘big deal’ is. Worse, they really don’t see a need to replace their aging Wii with this console, knowing they rarely play the Wii at this point anyway. So when the Wii U was released, the average Wii user just didn’t understand its appeal. The Wii U marketing just didn’t sell this console correctly to either the family audience or the hardcore gamer.

Bad Controller Button Placement

The final piece of this puzzle may seem insignificant, but it’s actually very significant to the hardcore game player. Because the PS3 and the Xbox map action buttons identically across games, you always know that pressing the button at a given position will do the same thing on the Xbox or the PS3. So you can move seamlessly between either console and play the same game without having to shift your button-pressing pattern. In other words, you can play blind, because the button location+action mapping is identical between the Xbox and the PS3. The button placement is as follows:

Y/Triangle = 12 o’clock, B/Circle = 3 o’clock, A/X = 6 o’clock, X/Square = 9 o’clock (Xbox / PS3)

The actions of Y and Triangle are the same between the systems. The actions of B and Circle are the same, and so on. If you play Call of Duty on the PS3 or Xbox, you always press the button at the 6 o’clock position to perform the same action.

The Wii U designers decided to place the buttons in opposition to the Xbox and PS3. The button placement for the Wii U:

X = 12 o’clock, A = 3 o’clock, B = 6 o’clock, Y = 9 o’clock (Wii U)

This button placement would be fine if A (3 o’clock) on the Wii U performed the same action as B/Circle (3 o’clock) on the Xbox and PS3. But it doesn’t. Instead, because the Wii U’s button is labeled ‘A’ (3 o’clock), it has the same function as the ‘A/X’ (6 o’clock) button on the Xbox and PS3. The ‘B’ button (6 o’clock) matches B/Circle (3 o’clock) on the Xbox/PS3. This means you have to completely reverse your play on the Wii U and retrain yourself to press the correct button; you can’t play blind. This is a difficult challenge if you’ve been playing game franchises on the Xbox for 10 years with the Xbox/PS3 button and action placement. It would be like handing a QWERTY touch typist a reversed QWERTY keyboard where P starts on the left and Q ends on the right. Sure, they could eventually learn to type with keys in this order, but it’s not going to be easy, and they’re going to hit P thinking it’s Q for quite a while.

For hardcore Xbox gamers, switching to the Wii U is a significant controller retraining challenge. When I replayed Assassin’s Creed III, I was forever hitting the button at the 6 o’clock position thinking it was the A button, because that’s where it is on the Xbox and PS3. Same for the reversed X and Y. By the end of Assassin’s Creed III, I had more or less adapted to the Wii U’s backwards controller, but I made a whole lot of stupid mistakes along the way from this button placement issue alone.

Either games need to support an alternative Xbox/PS3-compatible action placement, or Nintendo needs to sell a Wii U controller that maps the buttons identically to the Xbox and PS3. I personally vote for a new controller, as it doesn’t require game designers to do anything different. This button placement issue alone is a huge hurdle for the Wii U to overcome, and a needlessly stupid design when you’re trying to entice Xbox or PS3 gamers to your platform. I don’t want to relearn a new controller design just to play a game. Ergonomics is key to adoption, and this is just one big Nintendo ergonomics design fail. For the Wii, that button placement was fine. For the Wii U, the controller needs to map identically to the PS3 and Xbox button/action layout to allow easy and widespread adoption.
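
To make the mismatch concrete, here’s a small Kotlin sketch of the two layouts and the label-based remapping such a controller (or compatibility option) would need. The layouts come straight from the clock positions above; the names are purely illustrative:

```kotlin
// Clock positions of the face buttons, from the layouts above.
enum class Position { TWELVE, THREE, SIX, NINE }

// Button label found at each position on each controller.
val xboxAtPosition = mapOf(
    Position.TWELVE to "Y", Position.THREE to "B",
    Position.SIX to "A", Position.NINE to "X"
)
val wiiUAtPosition = mapOf(
    Position.TWELVE to "X", Position.THREE to "A",
    Position.SIX to "B", Position.NINE to "Y"
)

fun main() {
    // Muscle memory presses 6 o'clock for "confirm" (A on the Xbox)...
    println(xboxAtPosition[Position.SIX])  // A
    // ...but on the Wii U, 6 o'clock is B, and A sits at 3 o'clock.
    println(wiiUAtPosition[Position.SIX])  // B

    // A remapping controller or compatibility mode is just a label lookup:
    val wiiUPositionOf = wiiUAtPosition.entries.associate { it.value to it.key }
    println(wiiUPositionOf["A"])  // THREE
}
```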

Death of the Wii U

Unfortunately, due to the above factors, Nintendo will struggle to keep this console afloat before it finally throws in the towel to the Xbox One and the PS4. Worse, the Wii U really doesn’t have a niche. It lost its fad group-gaming image over a year ago when people stopped buying the Wii for that purpose, and those who did use it for that shoved it into a closet. The Wii U may have been somewhat positioned to become a hardcore system, but due to poor controller button placement, a lack of quality developers producing hardcore titles, the Wii U’s silly user interface, Nintendo’s antiquated ‘family friendly’ attitudes and Nintendo’s silly requirements on titles to reduce violence and language as part of that attitude, the Wii U doesn’t really have a market. It just doesn’t appeal to the hardcore gamers. So what’s left? Zelda and Mario, and that’s not enough to justify investing in the Wii U.

Just look at the titles presently available for the Wii U: at least 85% were original launch titles, most of them ported from other consoles. Combine that with the new fall console hardware releases, plus hardcore titles for existing consoles that completely sidestep the Wii U, and the Wii U just cannot succeed without some kind of major miracle out of Nintendo.

I fully expect to hear an announcement from Nintendo dropping the Wii U, not unlike Sega’s announcement pulling the plug on the Dreamcast so early in its console life.

Flickr’s new interface review: Is it time to leave Flickr?

Posted in botch, cloud computing, computers, social media by commorancy on May 21, 2013

Yahoo’s Flickr has just introduced its new ’tile’ interface (not unlike Windows Metro tiles) as the new user experience. Unfortunately, it appears Yahoo introduced this redesign without any kind of preview, beta test or user feedback. Let’s explore.

Tile User Experience

The tile interface may at first appear enticing, but you quickly realize just how busy, cluttered, cumbersome and ugly it is when you actually try to navigate and use it. The interface is very distracting and, again, overly busy. Note, it’s not just the tiles that are the problem. When you click an image from the tile sheet, it takes you to a huge black background with the image on top; then you have to scroll and scroll to get to the comments. No, not exactly how I want my images showcased. Anyway, let me start by saying that I’m not a fan of these odd square-tile interfaces (which look like a bad copycat of a Mondrian painting). The style has been common on the Xbox 360 for quite some time and is now standard in the Windows Metro interface. While I’ll tolerate it on the Xbox as a UI, it’s not an enticing user experience. It’s frustrating and, more than that, it’s ugly. So why exactly Yahoo decided on this user interface as its core experience, I am completely at a loss… unless this is some bid to bring back the Microsoft deal Yahoo tossed out several years back. I digress.

Visitor experience

While I’m okay with the tiles being the primary visitor experience, I don’t want this interface as my primary account owner experience. Instead, there should be two separate and distinct interfaces. An experience for visitors and an experience for the account owner.  The tile experience is fine for visitors, but keep in mind that this is a photo and art sharing site.  So, I should be able to display my images in the way I want my users to see them.  If I want them framed in black, let me do that. If I want them framed in white, let me do that. Don’t force me into a one-size-fits-all mold with no customization. That’s where we are right now.

Account owner experience

As a Flickr account owner, I want an experience that helps me manage my images, my sets, my collections and, most of all, the comments and statistics about my images. The tile experience gives me none of this. It may seem 'pretty' (ahem, pretty ugly), but it's not at all conducive to managing images. Yes, I can hear the argument that there is the 'organizr' you can use, but that's of limited functionality. I preferred the old view where I could see view counts at a glance, whether someone favorited a photo, whether there are any comments, and so on. I don't want to dig down into each photo to find this information; I want it at a glance. Hence the need for an account owner experience that's separate from what visitors see.

Customization

This is a photo sharing site. These are my photos. Let me design my user interface experience to match the way I want my photos to be viewed. It is a gallery, after all. If I were to show my work at a gallery, I would be able to choose the frames, the wall placement, the lighting and every other aspect of how my work is shown. Why not on Flickr? This is what Flickr needs to provide. Don't force us into a one-size-fits-all mold that is not only hideous to view, but also slow to load and difficult to navigate. Give me a site where I can frame my work. Give me a site where I can design a virtual lighting concept. Give me a site where I can add virtual frames. Let me customize each and every image's presentation to best show off my work.

Don’t corner me into a single user experience where I have no control over look and feel. If I don’t like the tile experience, let me choose from other options. This is what Flickr should have been designing.

No Beta Test?

Any site that rolls out a change as substantial as what Flickr has just pushed usually offers a preview window: a period of time during which users can try the new interface and give feedback. This does two things:

  1. Gives users a way to see what’s coming.
  2. Gives the site owner a way to tweak the experience based on feedback before rolling it out.

Flickr didn’t do this. It is huge mistake to think that users will just silently accept any interface some random designer throws out there. The site is as much the users as it is Yahoo’s. It’s a community effort. Yahoo provides us with the tools to present our photos, we provide the photos to enhance their site. Yahoo doesn’t get this concept. Instead, they have become jaded to this and feel that they can do whatever they want and users will ‘have’ to accept it. This is a grave mistake for any web sharing site, least of all Flickr. Flickr, stop, look and listen. Now is the time.

Photo Sharing Sites

Beyond Flickr, there are many, many photo sharing sites on the Internet. Flickr is not the only one. As content providers, we can simply take our photos and move them elsewhere. Yahoo doesn't get this concept either. They think they have some kind of captive audience. Unfortunately, this thinking is why Yahoo's stock is now at $28 a share and not $280 a share. We can move our photos to a place with a better experience (i.e., Picasa, DeviantArt, Photobucket, 500px, etc.). Yahoo needs to wake up and realize they are not the only photo sharing site on the planet.

Old Site Back?

No, I’m not advocating to move back to the old site. I do want a new user experience with Flickr. Just not this one. I want an experience that works for my needs. I want an interface that let’s me showcase my images in the way I want. I want a virtual gallery that lets me customize how my images are viewed and not by using those hideous and slow tiles.  Why not take a page from the WordPress handbook and support gallery themes. Let me choose a theme (or design my own) that lets me choose how to best represent my imagery. This is the user experience that I want. This is the user experience I want my visitors to have. These are my images, let me show them in their best light.

Suggestions for @Yahoo/@Flickr

Reimagine. Rethink. Redesign. I'm glad to see that Yahoo is trying new things. But the designers need to be willing to admit when a new idea is a failure and redesign it until it does work. Don't stop coming up with new ideas, and don't assume this is the way it is and there is nothing more. If Yahoo stops here with the interface as it is now, the site is dead, and very likely Yahoo with it. Yahoo is very nearly on its last legs anyway. Making such a huge blunder with such a well-respected (albeit antiquated) site could well be the last thing Yahoo ever does.

Marissa, have your engineers take this back to the drawing board and give us a site that we can actually use and that we actually want to use.


Bad Operating System Design Ideas Part 1

Posted in Apple, Mac OS X, windows by commorancy on November 6, 2011

Here is a new series I am putting together. While we all need to use operating systems every day, there are lots of stupid ideas on these devices that some developer thought would be 'cool'. Let's explore these design ideas and why they're stupid. Let's start with one company that people seem to think can do no wrong: Apple. Yeah, I could start with the easiest target, Windows, but I'll save the best (er… easiest) for last. 🙂

Apple isn’t immune

Spring Loaded Folders

Apple’s OS X most definitely has some quirky and, frankly, stupid design ideas that simply need to go away.  For very good reason, the first on this list is spring loaded folders. This is one design idea that breaks EVERY UI rulebook.  This functionality moves windows around under the cursor to unexpected places during, of all times, when you’re about to drop a file or folder on it.  It’s almost like some kind of bizarre practical joke. I mean, if this isn’t the absolute worst idea, I don’t know what is.  I’m not even sure what they were thinking at the time of conception, but windowing operating systems should never ever move windows or cursors automatically.  Let the user move things if they want them moved.  Worse, the idea of spring loaded folders has nothing at all to do with moving the windows around.  The spring loaded folder is supposed to open a folder when you are dragging and hovering over to the top of a folder name.  While opening a new window in the middle of holding drag-and-drop operation may seem like a great idea, it’s really obvious why this UI concept doesn’t work:  it will lead to dropping folders into the wrong place.  I don’t even want to say how many times I’ve inadvertently lost folders and files as a result of spring loaded folders. Yes, at least you can turn it off and it should be off by default.

Android isn’t immune

There is no easy way to manage running applications (at least not in 2.2) or, really, most other settings. You have to dig through the 'Settings' area to get to Applications and then manage them from there after a few drill-downs. The same goes for most settings. This is a mobile device. These things need EASY and FAST access. Digging through 5 menus to get to the Bluetooth area is both wasteful and dangerous while driving. Let's get these things front and center with one click.

IOS isn’t immune

Dragging icons from screen to screen to move them is nearly impossible. Most times, the icon drops onto the current screen at the edge, and you have to pick it up and drag it again. It would be far simpler to show a representation of all the screens at once and let you drop the icon onto the screen you want; you could then put it in the exact location later.

Windows isn’t immune (but who said it was?)

When you’re hovering over a scrollable area of an Explorer window, you have to click to activate before you can scroll.  The trouble is, there is no empty place to click that doesn’t activate something.  If you’re hovering over the folder area, whatever you click on will activate.  If you’re in the files area, the same thing.  This is magnified when the Explorer window also happens to be a file requester.  So, you’re trying to scroll to the bottom of the files area.  If you click anywhere in the files area, it will fill in the filename with the file you have just clicked.  Annoying.  I don’t know why Windows can’t just realize the mouse pointer is over that area and activate at least the scrolling part.  There really should be no click necessary.

Why Windows can’t remember my folder settings in Windows 7, I have no idea.  Getting rid of the Quick Launch bar, bad idea.  Turning the ‘Start’ button into the ‘Windows’ button, stupid (at least from a support perspective).  Can we at least keep some consistency from one OS to the next?

These are my initial pet peeves.  There are tons more that have yet to be documented.  I will highlight these in part two of this series.

Enjoy (and comment if you have peeves of your own).

Apple’s bleeding edge

Posted in Apple by commorancy on May 1, 2011

Apple loves to adopt brand-new, bleeding edge technologies and shun existing, functional, well-supported ones. Case in point: Apple's new MacBook Pro line sports a new Thunderbolt (Light Peak) port. While this port is capable of 10Gb per second, there are no peripherals yet available for it. Instead of also placing USB 3.0 ports (capable of 5Gb per second) on the MacBook Pro, they decided to skip that recent technology entirely. So the MacBook Pro ships with dog-slow USB 2.0 ports running at a whopping 480Mb per second. That's fine if the only thing you want to transfer is sync data to your iPhone or iPad. For hard drives, this speed is unbearably slow.
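To put those numbers in perspective, here's a quick back-of-the-envelope calculation (a sketch using the published theoretical maximums; real-world throughput is typically half or less):

    # Rough transfer-time comparison at theoretical port maximums.
    PORT_SPEEDS_MBPS = {
        "USB 2.0": 480,        # megabits per second
        "USB 3.0": 5000,
        "Thunderbolt": 10000,
    }

    file_size_gb = 500  # e.g., backing up a 500 GB external drive

    for port, mbps in PORT_SPEEDS_MBPS.items():
        hours = (file_size_gb * 8 * 1000) / mbps / 3600  # GB -> megabits -> hours
        print(f"{port:12s} ~{hours:.1f} hours")

    # USB 2.0      ~2.3 hours
    # USB 3.0      ~0.2 hours (about 13 minutes)
    # Thunderbolt  ~0.1 hours (about 7 minutes)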

Apple’s own stupidity

We don’t want ports with no peripheral support.  We want ports that are actually supported.  Simply because Apple has adopted the Thunderbolt technology doesn’t mean that it will in any way become a standard.  In fact, Apple’s bleeding edge adoption of the Thunderbolt port is about as risky as the Firewire (1394) port was way back when. And, where is Firewire now?  Dead.

I just don’t get why you would stick old technology on a brand new notebook when new technology already exists?  There are many USB 3 adapters and peripherals that could easily get users faster speeds until (or if) Thunderbolt actually takes off.

Apple needs to wake up and realize we want to connect fast drives to external ports.  So, at least give us ports where we can do this.  Sure, LaCie and other manufacturers will likely start making Thunderbolt compatible drive enclosures, but they probably won’t hit stores for months or possibly even as late as 2012. Until then, we have to live with USB 2.0 ports that suck rocks for speed.

Thanks Apple.


iPad or iPod?

Posted in Apple by commorancy on May 12, 2010

If you’re considering an iPad purchase, you’ll want to contemplate the following before you buy. The iPad has several ergonomic design problems that really prevent it from being truly hand and body friendly. Let’s explore and then compare that with the iPod Touch.

Curved back

All of Apple’s mobile products tend to have a curved back (excluding notebooks). I guess they like this design because they keep producing it. In the iPod, this isn’t so much of an issue. With the iPad, however, the curved back is a problem. If you lay it down, it wobbles and spins. So, if you want to put it on a surface, it will have to be a soft surface (a pillow, rug or other conforming surface). If you place it onto a hard surface, you’ll need to be prepared for the wobble. You will have a similar problem with an iPod, but if you add a case, you can somewhat manage this issue.

Higher power requirements

The iPad needs more power to charge and operate when plugged in than many USB ports can supply. So you may find that some notebooks cannot charge the iPad when docked, and you may have to plug it into an outlet to charge it adequately. The iPod, by contrast, has a much smaller power footprint, so charging off a USB port is not a problem.
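The arithmetic makes the problem clear (a sketch; 500mA is the USB 2.0 spec's standard per-port limit, and 10 watts is the rating of the power adapter Apple ships with the iPad):

    # Why many notebook USB ports can't charge an iPad.
    usb_port_watts = 5.0 * 0.5    # 5 V x 500 mA = 2.5 W from a standard USB 2.0 port
    ipad_adapter_watts = 10.0     # Apple's iPad USB power adapter rating

    shortfall = ipad_adapter_watts / usb_port_watts
    print(f"A standard port supplies {usb_port_watts} W; "
          f"the iPad's adapter supplies {shortfall:.0f}x that.")
    # => A standard port supplies 2.5 W; the iPad's adapter supplies 4x that.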

Weight

The iPad is heavy. It may only weigh in at 1.5 pounds (1.6 with 3G), but when you're holding that in your hand for a while, it starts to get heavy. So don't expect this weight to remain comfortable in your hand for long. For a book reading experience, where you might want to hold it for several hours, you're going to have to find a way to prop it up. The iPod touch, by comparison, is a comfortable weight and fits in the hand nicely.

Kickstand (or lack thereof)

It’s quite clear that due to the curvature of the back and the weight of this device, it desperately needs a kickstand to hold it in a proper position and still allow it to be touchable. Without such an accessory, the iPad quickly becomes unwieldy and clumsy (as if it wasn’t clumsy already).

Design

Some people think the design is sleek and simple. I'm not really convinced. The thick black border looks dated. The curved back prevents it from sitting flat. It's too heavy. The battery lasts only 10 hours and requires a higher-power charging adapter, so don't expect to plug it into your old iPhone or iPod charger and have it charge the iPad. That charger might power it, but it's not going to charge it.

The touch interface is both at once sleek and cumbersome. It works, but in some cases it doesn’t (when wearing gloves). The glossy screen looks slick until you have thousands of smeary fingerprints and oil all over it. Then it’s just gross.

It’s not truly a portable device. The physical size of the device precludes putting it in your pocket. So, you have to carry it around in a case. It may look cool when you take it out to use it, but it’s still clumsy and big. If I’m going to carry around a device of this size, I would prefer to carry around a netbook with a real keyboard and real mouse pad.

Price

The iPad is effectively Apple’s netbook. They didn’t want to do a netbook, so they compromised by producing a large iPod touch (the iPad). This device has a larger screen and bigger touch surface, but that also means it has more chances of breakage if bumped, jarred or dropped. So, if you buy one, you need a case for it.

Overall

The iPad’s design is a bit clumsy. It tries to improve on the iPod touch, but the only thing that is really an improvement is the screen size. If Apple would release a 3G iPod Touch or a paperback book sized iPad with 3G, I might actually consider one. The iPad’s current size is too big and needs to be scaled back. The weight needs to be about a quarter of the iPad (or less). A smaller screen means that it’s probably harder to break. Finally, the price needs to get down to $250 or so. Right now, the price is too high at $629 with 3G for a glorified iPod touch. If it had a full Mac OS X system on it, then it could be worth it.

Apple’s got a lot of work to be done before the next iteration of the iPad. Let’s hope the device actually succeeds. I’m just not so sure of that with past tablet device successes.


Video game designers stuck in a rut

Posted in video game, video game design, video gaming by commorancy on March 4, 2009

Video game consoles such as the PS3, Wii and Xbox 360 (and even PCs) have gotten more complex and provide impressive 3D capabilities and 5.1 sound. Yet, video games have not kept pace. There was a time, many years ago, when video game designers would take chances and create unique and unusual titles: games that challenge the mind and the gamer's thought processes. Games used to be fun to play.

In recent years…

Today, most games fall into a very small subset of genres: First/Third Person Shooter, Fighting, RPG, Simulation, Sports or Music (with a few lesser genres appearing occasionally).  While the innovation in the hardware continues to progress, the video game designers are not progressing.  Sure, it takes time to get actors into a studio to record tracks.  Sure, it takes time to build and rig up 3D models.  Sure, it takes time to motion capture realistic action to plug into those 3D models.  Yes, it takes time to program all of those complex algorithms to make it all work as a whole. I understand all of that.  But that’s the process, not the innovation.  These are the tools necessary to get the job done.  They are a means to an end and not the end in itself.

Design considerations

For whatever reason, big video game executives have it in their heads that the tried-and-true model sells a video game. That may be true to some degree, but you can also wear out your welcome with overused techniques. In other words, when a game title sucks, the word spreads FAST in the video game community. That can stop a video game's sales dead.

When starting a new game project, the producer and creative staff need to decide whether they plan to introduce something new and innovative. First and third person shooters (FPS/TPS) have already been done and done and done and done again ad nauseam. That's not to say that yet another TPS or FPS can't be successful. It can… IF there's something compelling about the game, and that's a big IF.

Sure, there are video gamers who will play anything they can get their hands on (known as video game fanatics). But, as a game developer, you can't rely on these gamers to carry your title to success; they do not necessarily make up the majority of the game-buying public. I, for one, am a much more discriminating buyer. I simply won't buy every title that comes along. I pick and choose titles based on the styles of games I know I like to play. For example, I do not buy turn-based games of any sort. I don't care whether it's based on dice rolls or card draws, or whether it's a fighting, FPS or RPG game. I won't buy them because turn-based mechanics get in the way of actual playing. Turn-based games also tend to be antiquated. I understand where turn-based play came from (i.e., board games), but it has no place in a 3D-world video game.

Again, choosing to add turn-based play to your game is your decision as a developer. But by doing so, you automatically exclude gamers like me who won't buy turn-based games. There are gamers who do enjoy turn-based games, but I don't know of any who buy only turn-based titles and refuse real-time play. So turn-based play limits your buyers to those who accept it, while making your game real-time includes a much bigger audience.

These are up-front design considerations that, as a developer and producer, you need to understand about gamer buying habits.  These are decisions that can directly affect the success of your video game title.

Previous innovations

In the early days of 3D console games (mid-80s through mid-90s), game developers were willing to try new and unusual things.  Of course, these were the days when 3D was limited to flat untextured surfaces.  We’ve come a long way in the graphics arena.  But, even as far as we’ve come in producing complex and unusual 3D worlds within the games, the play styles have become firmly stagnant.  For example, most First/Third person shooters today rely on a very linear story to get from point A to point B.  Driving the game along is an invisible path.  So, while the complex 3D world is wonderfully constructed, the character can only see the world from a limited vantage point.   The cameras are usually forced to be in one spot (near or behind the character).  The character is forced to traverse the world through a specific path with invisible boundaries.   So, exploration of the world is limited to what the game designer and story allow you to do.

This style of game is very confining.  It forces the gamer to play the game on the programmer’s terms rather than on the gamer’s terms.  Worse, when this play style is combined with checkpoint saves, health meters and other confining aspects, these games can easily become tedious and frustrating.  So, what a game developer may consider to be ‘challenging’, in reality becomes frustration.

A shot of new innovation

The video game development world needs to open its collective eyes. Don't rely on the tried-and-true. Don't rely on formulas. Don't assume that because a previous game worked, your next game will also work. What works is what video gamers like; what doesn't work is what video gamers don't like. The video game community is very vocal, so listen to your audience and learn. Most of all, try new things, and by that I don't mean tweaking an existing formula. I mean take a risk. Try something new. Let gamers explore the world. Produce worlds that are open and complete. Let gamers build things. Let gamers take the game to whole new levels. Build in construction sets to allow gamers to create things you have never thought of. Build in ways to save those constructions to web sites and allow gamers to monetize the things they've built.

These are innovations that lead to progress. These are innovations that instill addictiveness into the game. These are innovations that keep your game alive for years to come. You only need to look at the popularity of Second Life, World of Warcraft and even the Elder Scrolls series to understand that an unlimited world with construction kits allows gamers to take the game in directions you've never even thought of.

Most games play through in only a few weeks (sometimes less than 1 week).  The gamer buys it, plays it through and then trades it in never to touch it again.  This is effectively a movie rental.  So, once the gamers have had their fill, the game is effectively dead.  This style of game does not provide your company with a continued stream of revenue from that title.  Only titles that have open ends, that offer expansion packs, and that allow gamers to construct things on their own are the games that keep a title alive for years rather than a few weeks.

That may require a slightly bigger cash outlay in the beginning (to support a title with a longer lifespan), but if done correctly, it should also provide much more income for the game company. This is why titles like Fallout 3, Oblivion: Elder Scrolls IV and World of Warcraft are talked about months (and even years) after their initial release, while forgettable games like Fracture, Too Human or even Force Unleashed have no extra play value after the game ends.

Gaming elements incorrectly used

In too many game designs, programming elements are used incorrectly to 'challenge' the gamer. Game challenges should come in the form of story elements, puzzles, clues and riddles. Challenge elements should not involve game saving, turn-based play, checkpoints, character deaths, camera movement, controller button sequences, or anything dealing with the real-world physicality of the gaming system. In other words, challenges should not be tied to something outside the video game or outside the story. So, as a designer, you should always ask yourself: does this challenge move the game story forward? If the answer is no, the challenge is a failure. If yes, the story becomes better for the challenge.

Button Sequences

For example, requiring the gamer to respond to a sequence of button presses within a very specific real-world time limit is not challenging; it's frustrating. It means the gamer has to trial-and-error that section until they can make it through the timed sequence. This is a failed, incorrectly used 'challenge' event. The section does not challenge; it merely requires the gamer to 'get through' it, and 'getting through' is not a positive gaming aspect. Worse, if such a sequence appears in an FPS game only occasionally (say, only to fight a boss), that's also incorrect usage. If the play style is used regularly and consistently throughout the game, the gamer knows it's coming. If it's used rarely and at undisclosed points, the gamer has to fumble around figuring out what's going on with no warning.

Death Sequences

Another common but incorrectly used gaming element is the character death sequence. For some reason, recent games have promoted character deaths as part of the challenge. There are sections of some games where the designers specifically built the level so the gamer has to 'die' his way through it. These trial-and-error sequences, again, are incorrectly used and do not help move the story or the game forward. They also tend to promote death as a way to solve problems, which is not appropriate.

Games should promote the positive aspects of life, not death as a means to an end. Worse, games like Too Human take the death sequence to an extreme and make the gamer wait through an excruciatingly long cinematic each time the character dies. This, again, is an inappropriate use of a gaming element. The game should be designed for the GAMER, not for the game designer. Long death sequences like Too Human's overemphasize death, and that is, again, not appropriate.

Health Meters

Health meters are another gaming element that is commonly misused, or missing entirely. Every game that allows the character to 'die' needs a visible health meter. Games that use the Unreal engine often don't have one; instead, when your character takes enough 'damage', the screen becomes red with a halo. The problem with this system (and why it's incorrectly used) is that the gamer doesn't know how far from 'death' the character is. That is not a challenge. It's annoying and frustrating, and it leaves the gamer wondering just how much health they have.
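For contrast, here's a minimal sketch of the kind of always-visible readout being argued for here. The numbers and bar style are arbitrary illustration, not any engine's actual API:

    # A bare-bones, always-visible health meter (illustrative only).
    class HealthMeter:
        def __init__(self, max_hp: int = 100):
            self.max_hp = max_hp
            self.hp = max_hp

        def damage(self, amount: int) -> None:
            self.hp = max(0, self.hp - amount)

        def render(self, width: int = 20) -> str:
            # The gamer always sees exactly how far from death they are.
            filled = round(width * self.hp / self.max_hp)
            return f"[{'#' * filled}{'-' * (width - filled)}] {self.hp}/{self.max_hp}"

    meter = HealthMeter()
    meter.damage(65)
    print(meter.render())   # => [#######-------------] 35/100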

Game Saves

Again, story elements move the game forward. Having the gamer stop and reload a game takes the gamer OUT of the game and forces a restart from some arbitrary point. Checkpoint games are particularly bad about this. When checkpoints are the only way to save, the gamer must waste real-world time on trial-and-error gaming: waiting through character deaths and then the subsequent reload of the level to restart at the checkpoint. Again, this is not a challenge; it's simply a waste of time. When a level is designed such that the gamer's character will die at least once to get through it, the level has failed, because it forces a reload of a previous save. This element, again, is misused as a challenge element. Taking the gamer out of the game by forcing a reload ruins the experience and disrupts the story you, as a developer, worked so hard to make cohesive.

Future of Gaming

Even though game developers are now stuck in the genre rut, they have the power to break out of it. They have the means to produce games with more compelling and addictive content. Instead of using old formulas that used to work, designers need to look for new ways to innovate, monetize and bring video gamers into their game worlds and keep them there. Games shouldn't be viewed as short-term point-A-to-point-B entities. Games need to move to open-ended, free-exploration worlds: worlds that let the gamer play on the gamer's terms. Sure, there can be story elements that tie the game together, as in Fallout 3 and Oblivion; in fact, I'd expect that. But these story threads should start and end inside the game as quests. You can play them when you want to, and you can leave them hanging if you don't want to complete them.

Game elements like checkpoints, saves and button sequences need to be rethought. Some of these elements can be used successfully, like checkpoints, if implemented thoughtfully. Allowing the gamer to save anywhere lets the gamer save and restart at their leisure, but a manual save process leaves it up to the gamer to remember to save. For this reason, checkpoints combined with save-anywhere are the best alternative. After all, the game is supposed to be produced for the gamer.
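Here's a minimal sketch of that hybrid: automatic checkpoint saves as a safety net for forgetful gamers, plus manual save-anywhere so the gamer stays in control. The state dictionary and slot names are hypothetical illustration, not any engine's actual API:

    # Hybrid save system sketch: engine-driven checkpoints + save-anywhere.
    import copy

    class SaveSystem:
        def __init__(self):
            self.slots = {}  # slot name -> snapshot of game state

        def checkpoint(self, state: dict) -> None:
            """Called automatically by the level script at designed points."""
            self.slots["checkpoint"] = copy.deepcopy(state)

        def save(self, slot: str, state: dict) -> None:
            """Called whenever the gamer asks to save -- anywhere, anytime."""
            self.slots[slot] = copy.deepcopy(state)

        def load(self, slot: str) -> dict:
            return copy.deepcopy(self.slots[slot])

    saves = SaveSystem()
    state = {"level": 3, "hp": 80, "inventory": ["medkit"]}
    saves.checkpoint(state)             # engine-driven safety net
    state["hp"] = 15
    saves.save("before_boss", state)    # gamer-driven save-anywhere
    state = saves.load("before_boss")   # restart on the gamer's terms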

Designers, creators and developers need to challenge the notion of what a video game is. They need to use 3D worlds in creative NEW ways. Let users explore those worlds on their own terms, not along some dictated path and story. Designers need to take a page from Bethesda's book on free-roaming RPGs and expand on it. Closed-ended, path-based games have limited playability and definitely no replay value. Monetarily, developers need to understand that open-ended, construction-based games let gamers take ownership of the game and make it their own; closed, narrow-pathed games do not.
