Random Thoughts – Randosity!

App-casting vs Screen Casting vs Streaming

Posted in Android, Apple, computers by commorancy on October 8, 2016

A lot of people seem to be confused by these three types of broadcasting, including where AppleTV and Chromecast fit in. I’m here to help clear that up. Let’s explore.

Streaming and Buffering

What exactly is streaming? Streaming is when software takes content (a music file, movie file, etc.) and sends it out in small chunks, from the beginning to the end of the file, over a network. While streaming, there is a moving point-in-time marker where playback begins for anyone joining. In other words, when you join a streaming feed, you’re watching that feed live. If you join 20 minutes in, you’ll miss the first 20 minutes that have already played. The marker is the point in time that’s currently being played from the media.

What about broadcasting? Is it the same? Yes, it is a form of streaming that is used during app-casting and screen casting. So, if you join a live screen casting feed, you won’t see what has already played; you only see the stream from the point where you joined, already in progress.

Streaming also uses buffering to support its actions. That means that during the streaming process, the application buffers up a bunch of content into memory (the fastest type of storage possible) so that it can grab the next chunk rapidly and send it to the streaming service for smooth, continuous playback. Buffering avoids waiting on slow devices like hard drives and other storage devices, which could impair smooth playback. Because of buffering, there may be a delay between what your screen shows and what the person watching sees.

Streaming encodes the content to a streaming format at broadcast time, and the client decodes it during playback. Therefore, the endpoint client viewer may choose to reduce the resolution of the content to improve streaming performance. This is why, if you’re watching Netflix or Amazon, the resolution may drop to less than HD. However, if you’re watching content across a local network at home, this should never be a problem (unless your network or WiFi is just really crappy).

Note, I will use the words stream and cast interchangeably to mean the same thing within this article.

Screen Casting (i.e., Screen Mirroring)

Screen casting is broadcasting the screen of your device itself. For example, if you want to broadcast the screen of your MacBook or your Android tablet, it will broadcast at whatever resolution your screen is currently running. If your resolution is 1920×1080, then it will stream your screen at HD resolution. If your screen’s resolution is less than this, it will stream the content at less than HD. If your screen resolution is more than this, it will stream at that resolution. Though, with some streaming software, you can set a top end resolution and encoder to prevent sending out too much data.

Because screen casting or mirroring only casts at the resolution of your screen, this is not optimal for streaming movies (unless your movie is 1080p and matches your screen’s resolution). If your screen runs at a lower resolution than the content, it is not optimal for watching movies. If you want to watch UltraHD movies, this is also not possible in most cases (unless your PC has an extremely advanced graphics card).

Because screen resolutions vary so much across mobile devices, it’s likely your screen resolution is far less than the content you want to watch. For this reason, app developers have created app-casting.

App-casting

What exactly is app-casting? App-casting distances itself from the screen resolution by streaming the content at the content’s resolution. App-casting is when you use AppleTV or Chromecast to stream content from an app-cast enabled application on your computer or mobile device. Because the content dictates the resolution, there are no pesky screen resolution problems to get in the way. This means content streamed through applications can present their content at full native resolutions.

For Netflix, ABC TV, NBC TV, Hulu and Amazon, this means you’ll be watching those movies and TV shows in glorious full 1080p resolution (or whatever the app-casting receiver supports, depending also on the content). For example, today AppleTV and Chromecast only support up to HD resolution (i.e., 1080p). In the future, we may see UltraHD versions of AppleTV and Chromecast become available. However, for now, we’re limited to HD with these devices.

Though, once UltraHD versions of AppleTV and Chromecast arrive, streaming to these devices will mean heftier bandwidth requirements. So, while your home network might be fine for 1080p content casting, UltraHD content streaming may not run quite as well without better bandwidth. To stream UltraHD 4K content, you may have to upgrade your wireless network.

Note that Google has recently announced an UltraHD 4k Chromecast will be available in November 2016.

Chromecast and AppleTV

These are the two leading app-streaming devices on the market. AppleTV supports iOS app streaming and Chromecast supports Android OS streaming. While these are commonly used and sold for this purpose, they are by no means the only software or hardware solutions on the market.

For example, DLNA / UPnP is common for streaming to TVs, Xbox One and PS4. This type of streaming can be found in apps available on both iOS and Android (as well as MacOS, Linux and Windows). When streaming content from a DLNA compatible app, you don’t need a special receiver like AppleTV or Chromecast. Many smart TVs today support DLNA streaming right out of the box. To use DLNA, your media device needs to present a list of available items. After selection, DLNA will begin streaming to your TV or other device that supports DLNA. For example, Vizio TVs offer a Multimedia app from the Via menu to start a DLNA search for media servers.

Note that you do not have to buy an AppleTV or Chromecast to stream from your tablet, desktop or other device. There are free and paid DLNA, Twitch and YouTube streaming apps. You can stream both your display and possibly even your apps using third party apps. You’ll need to search for a DLNA streaming app in whichever app store is associated with your device.

DLNA stands for Digital Living Network Alliance. It is an organization that advocates for content streaming around the home.

App-casting compatibility

To cast from an application on any specific operating system to devices like Chromecast or AppleTV, the app must support this remote display protocol. Not all apps support it, though Apple and Google built apps do. Third party applications must build their software to support these external displays. If the app doesn’t support it, you won’t see the necessary icon to begin streaming.

For example, to stream on iOS, a specific icon appears to let you know that an Apple TV is available. For Android, a similar icon also appears if a Chromecast is available. If you don’t see the streaming icon on your application, it means that your application does not support streaming to a remote display. You will need to ask the developer of that software to support it.

There are also third party casting apps that support streaming video data to remote displays or remote services like Twitch or YouTube. You don’t necessarily need to buy an AppleTV or Chromecast to stream your display.

Third Party Streaming Apps

For computers or mobile devices, there are a number of streaming apps available. Some require special setups, some support Twitch or YouTube and others support DLNA / UPnP. If you’re looking to stream content to the Internet, then you’ll want to pick one up that supports Twitch or YouTube. If you’re wanting to stream your data just to your local network, you’ll want to find one that supports DLNA.

You’ll just need to search through the appropriate app store to find the software you need. Just search for DLNA streaming and you’ll find a number of apps that support this protocol. Note that apps that don’t require the use of Chromecast or AppleTV may tend to be less robust at streaming. This means they may crash or otherwise not work as expected. Using AppleTV or Chromecast may be your best alternative if you need to rely on perfect streaming for a project or presentation.

Basically, for stability and usability, I recommend using an AppleTV or Chromecast. But, there are other software products that may work.

How to stop Mac dock icon bouncing

Posted in Apple, botch, computers by commorancy on September 28, 2015

When an application starts up in MacOS X Yosemite, it bounces the application dock icon a few times, then stops bouncing once the application has started. For me, this is perfectly fine because at least there’s a positive response. Positive response is never a bad thing in operating system design.

Unfortunately, Apple decided to overload this same bouncing behavior for notifications, bouncing a dock icon to get your attention. For me, this is definitely not wanted. Not only is it extremely annoying, it never stops until you go touch that icon. It also performs this bouncing way too frequently. There are much better ways to get user attention than by bouncing the dock icon. Thankfully, there’s a way to stop this annoying and unwanted UI behavior. Let’s explore.

Defaults Database

Apple has what’s known as the user defaults database. It is a database of settings not unlike old UNIX dotfiles, but much more extended. Unfortunately, most developers don’t document which settings can go into the defaults database and many of the settings may be hidden. However, you can easily read the values by opening Terminal.app and then typing:

$ defaults read com.apple.dock | more

This command will spew out a lot of stuff, so you’ll want to pipe it to more to page through it. Each app has its own namespace similar in format to com.apple.dock that you can review. Not all apps support changing settings this way. For other apps, simply replace com.apple.dock with the appropriate application namespace and you can read up the settings for that application. If you decide to change any of the values, you may have to kill and restart the application or log out and log back in.
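
If you’re not sure which application namespaces exist on your system, the defaults command can list them all. As a quick sketch (the tr simply puts each comma-separated domain on its own line; output varies by machine):

$ defaults domains | tr ',' '\n' | more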

In short, there is a way to stop the bouncing using the defaults command. To do this, you will need to update the defaults database for com.apple.dock with the correct setting to stop it.

Stop the Bouncing
To stop the bouncing of dock icons, open a terminal shell and at a command prompt, type the following:

$ defaults write com.apple.dock no-bouncing -bool TRUE
$ killall Dock

Keep in mind that this is a global setting. This stops the dock icon bouncing for every application on your system for all notifications. The launch icon bouncing is not controlled by this setting. For that, you should visit the preferences area.

You can always reenable the bouncing at any time by opening terminal and then typing:

$ defaults write com.apple.dock no-bouncing -bool FALSE
$ killall Dock

Note that the defaults database is stored locally in each user account. So, if you log into several different accounts on your Mac, you’ll need to do this for each of your accounts.

Please leave me a comment below if this doesn’t work for you.

Flickr’s new interface review: Is it time to leave Flickr?

Posted in botch, cloud computing, computers, social media by commorancy on May 21, 2013

Yahoo’s Flickr has just introduced their new ’tile’ interface (not unlike Windows Metro tiles) as the new user interface experience. Unfortunately, it appears that Yahoo introduced this site without any kind of preview, beta test or user feedback. Let’s explore.

Tile User Experience

The tiles interface at first may appear enticing. But, you quickly realize just how busy, cluttered, cumbersome and ugly this new interface is when you actually try to navigate and use it. The interface is very distracting and, again, overly busy. Note, it’s not just the tiles that are the problem. When you click an image from the tile sheet, it takes you to a huge black background with the image on top. Then you have to scroll and scroll to get to the comments. No, not exactly how I want my images showcased. Anyway, let me start by saying that I’m not a fan of these odd shaped square tile interfaces (which look like a bad copycat of a Mondrian painting). The interface has been common on the Xbox 360 for quite some time and is now standard for the Windows Metro interface. While I’ll tolerate it on the Xbox as a UI, it’s not an enticing user experience. It’s frustrating and, more than that, it’s ugly. So, why exactly Yahoo decided on this user interface as their core experience, I am completely at a loss… unless this is some bid to bring back the Microsoft deal they tossed out several years back. I digress.

Visitor experience

While I’m okay with the tiles being the primary visitor experience, I don’t want this interface as my primary account owner experience. Instead, there should be two separate and distinct interfaces. An experience for visitors and an experience for the account owner.  The tile experience is fine for visitors, but keep in mind that this is a photo and art sharing site.  So, I should be able to display my images in the way I want my users to see them.  If I want them framed in black, let me do that. If I want them framed in white, let me do that. Don’t force me into a one-size-fits-all mold with no customization. That’s where we are right now.

Account owner experience

As a Flickr account owner, I want an experience that helps me manage my images, my sets, my collections and, most of all, the comments and statistics about my images. The tile experience gives me none of this. It may seem ‘pretty’ (ahem, pretty ugly), but it’s not at all conducive to managing the images. Yes, I can hear the argument that there is the ‘organizr’ that you can use. Yes, but that’s of limited functionality. I preferred the view where I could see view counts at a glance, whether someone has favorited a photo, whether there are any comments, etc. I don’t want to have to dig down into each photo to find this information, I want it at a glance. Hence, the need for an account owner interface experience that’s separate from what visitors see.

Customization

This is a photo sharing site. These are my photos. Let me design my user interface experience to match the way I want my photos to be viewed. It is a gallery after all. If I were to show my work at a gallery, I would be able to choose the frames, the wall placement, the lighting and all other aspects about how my work is shown. Why not Flickr? This is what Flickr needs to provide. Don’t force us into a one-size-fits-all mold of something that is not only hideous to view, it’s slow to load and impossible to easily navigate.  No, give me a site where I can frame my work on the site. Give me a site where I can design a virtual lighting concept.  Give me a site where I can add virtual frames. Let me customize each and every image’s experience that best shows off my work.

Don’t corner me into a single user experience where I have no control over look and feel. If I don’t like the tile experience, let me choose from other options. This is what Flickr should have been designing.

No Beta Test?

Any site that rolls out a change as substantial as what Flickr has just pushed usually offers a preview window: a period of time where users can preview the new interface and give feedback. This does two things:

  1. Gives users a way to see what’s coming.
  2. Gives the site owner a way to tweak the experience based on feedback before rolling it out.

Flickr didn’t do this. It is a huge mistake to think that users will just silently accept any interface some random designer throws out there. The site is as much the users’ as it is Yahoo’s. It’s a community effort. Yahoo provides us with the tools to present our photos, we provide the photos to enhance their site. Yahoo doesn’t get this concept. Instead, they have become jaded and feel that they can do whatever they want and users will ‘have’ to accept it. This is a grave mistake for any web sharing site, least of all Flickr. Flickr, stop, look and listen. Now is the time.

Photo Sharing Sites

Beyond Flickr, there are many, many photo sharing sites on the Internet. Flickr is not the only one. As content providers, we can simply take our photos and move them elsewhere. Yahoo doesn’t get this concept. They think they have some kind of captive audience. Unfortunately, this thinking is why Yahoo’s stock is now at $28 a share and not $280 a share. We can move our photos to a place where there’s a better experience (i.e., Picasa, DeviantArt, Photobucket, 500px, etc). Yahoo needs to wake up and realize they are not the only photo sharing site on the planet.

Old Site Back?

No, I’m not advocating moving back to the old site. I do want a new user experience with Flickr. Just not this one. I want an experience that works for my needs. I want an interface that lets me showcase my images in the way I want. I want a virtual gallery that lets me customize how my images are viewed, not those hideous and slow tiles. Why not take a page from the WordPress handbook and support gallery themes? Let me choose a theme (or design my own) that lets me choose how to best represent my imagery. This is the user experience that I want. This is the user experience I want my visitors to have. These are my images, let me show them in their best light.

Suggestions for @Yahoo/@Flickr

Reimagine. Rethink. Redesign. I’m glad to see that Yahoo is trying new things. But, the designers need to be willing to admit when a new idea is a failure and redesign it until it does work. Don’t stop coming up with new ideas. Don’t think that this is the way it is and there is nothing more. If Yahoo stops at this point with the interface as it is now, the site is dead and very likely with it Yahoo. Yahoo is very nearly on its last legs anyway. Making such a huge blunder with such a well respected (albeit antiquated) site could well be the last thing Yahoo ever does.

Marissa, have your engineers take this back to the drawing board and give us a site that we can actually use and that we actually want to use.


iPhone Risk: Your Employer and Personal Devices

Posted in best practices, cloud computing, computers, data security, Employment by commorancy on May 5, 2013

So, you go to work every day with your iPhone, Android phone or even an iPod. You bring it with you because you like having the convenience of people being able to reach you or because you listen to music. Let’s get started so you can understand your risks.

Employment Agreements

We all know these agreements. We typically sign one whenever we start a new job. Employers want to make sure that each employee remains responsible throughout employment, and some even require the employee to remain responsible after leaving the company for a specified (or sometimes unspecified) period of time. That is, these agreements make you, as an employee, personally responsible for not sharing things that shouldn’t be shared. Did you realize that many of these agreements extend to anything on your person and can include your iPhone, iPod, Android Phone, Blackberry or any other personal electronic device that you carry onto the property? Thus, the Employment Agreement may allow your employer to seize these devices to determine if they contain any data they shouldn’t contain.

You should always take the time to read these agreements carefully and thoroughly. If you don’t or can’t decipher the legalese, you should take it to an attorney and pay the fee for them to review it before signing it.  You might be signing away too many of your own personal rights including anything you may be carrying on your person.

Your Personal Phone versus Your Employer

We carry our personal devices to our offices each and every day without really thinking about the consequences. The danger, though, is that many employers now allow you to load up personal email on your own personal iDevices. Doing this can especially leave your device at risk of legal seizure or forfeiture under certain conditions.  So, always read Employment Agreements carefully. Better, if your employer requires you to be available remotely, they should supply you with all of the devices you need to support that remote access. If that support means you need to be available by phone or text messaging, then they should supply you with a device that supports these requirements.

Cheap Employers and Expensive Devices

As anyone who has bought an iPhone or an Android phone can attest, these devices are not cheap. Because many people are buying these for their own personal use, employers have become jaded by this and latch onto this freebie, allowing employees to use their own devices for corporate communication purposes. This is called a subsidy. You are paying your cell phone bill and giving part of that usage to your employer, unless your employer is reimbursing you for part or all of your plan rate. If you are paying your own bill without reimbursement, but using the device to connect to your company’s network or to corporate email, your device is likely at high risk should there be a legal need to investigate the company for any wrongdoing. This could leave your device at risk of being pulled from your grasp, potentially forever.

If you let the company reimburse part or all of your phone bill, especially on a post-paid plan, the company could seize your phone on termination as company property.  The reason, post-paid plans pay for the cost of the phone as part of your bill. If the company reimburses more than 50% of the phone cost as part of your bill, they could legally own the phone at the end of your employment. If the company doesn’t reimburse your plan, your employer could still seize your device if you put corporate communication on your phone because it then contains company property.

What should I do?

If the company requires that you work remotely or have access to company communication after hours, they need to provide you with a device that supports this access. If they are unwilling to provide you with a device, you should decline to use your personal device for that purpose. At least, you should decline unless the employment agreement specifically states that they can’t seize your personal electronics. Although, most employers likely won’t put a provision in that explicitly forbids them from taking your device. Once you bring your device on the property, your employer can claim that your device contains company property and seize it anyway. Note that even leaving it in your car could be enough if the company WiFi reaches your car in its parking space.

Buy a dumb phone and use that at work. By this I mean, buy a phone that doesn’t support WiFi, doesn’t support a data plan, doesn’t support email, doesn’t support bluetooth and that doesn’t support any storage that can be removed. If your phone is a dumb phone, it cannot be claimed that it could contain any company file data.  If it doesn’t support WiFi, it can’t be listening in on company secrets.  This dumb phone basically requires your company to buy you a smart phone if they need you to have remote access to email and always on Internet. It also prevents them from leeching off your personal iPhone plan.

That doesn’t mean you can’t have an iPhone, but you should leave it at home during work days. Bring your dumb phone to work. People can still call and text you, but the phone cannot be used as a storage vehicle for company secrets (unless you start entering corporate contacts into the phone book). You should avoid entering any company contact information in your personal phone’s address book. Even this information could be construed as confidential data and could be enough to have even your dumb phone seized.

If they do decide to seize your dumb phone, you’ve only lost a small amount of money in the phone and it’s simple to replace the SIM card in most devices. So, you can probably pick up a replacement phone and get it working the same day for under $100 (many times under $30).

Request to Strike Language from the Employment Agreement

Reading through your Employment Agreement can make or break the deal of whether or not you decide to hire on. Some Employment Agreements are way overreaching in their goals. Depending on how the management reacts to your request to strike language from the Employment Agreement may tell you the kind of company you are considering. In some cases, I’ve personally had language struck from the agreement and replaced with an addendum to which we both agreed and signed. In another case, I walked away from the position because both the hiring and HR managers refused to alter the Employment Agreement containing overreaching language. Depending on how badly they want to fill the position, you may or may not have bargaining power here. However, if it’s important to you, you should always ask. If they decline to amend the agreement, then you have to decide whether or not the position is important enough to justify signing the Agreement with that language still in place.

But, I like my iPhone/iPad/iPod too much

Then, you take your chances with your employer. Only you can judge your employer’s intent (partly by reading your employment agreement). When it comes down to brass tacks, your employer will do what’s right for the company, not for you. The bigger the company gets, the more likely they are to take your phone and not care about you or the situation. If you work in a 1000+ employee company, your phone seizure risk greatly increases. This is especially true if you work in any position where you may have access to extremely sensitive company data.

If you really like your device, then you should protect it by leaving it someplace away from the office (and not in your car parked on company property). This will ensure they cannot seize it from you when you’re on company property. However, it won’t stop them from visiting your home and confiscating it from you there.

On the other hand, unlike the dumb phone example above, if they seize your iPhone, you’re looking at a $200-500 expense to replace the phone plus the SIM card and possibly other expenses. If you have synced your iPhone with your computer at home and data resides there, that could leave your home computer at risk of seizure, especially if the Federal Government is involved. Also, because iCloud now stores backups of your iDevices, they could petition the court to seize your Apple ID from Apple to gain access to your iDevice backups.

For company issued iPhones, create a brand new Apple ID using your company email address. Have your company issued phone create its backups in your company created Apple ID. If they seize this Apple ID, there is no loss to you. You should always, whenever possible, create separate IDs for company issued devices and for your personal devices. Never overlap personal and company login IDs, no matter how tempting it may be. This includes doing such things as linking your personal Facebook, Google, LinkedIn, Yahoo or any other personal site accounts to your corporate issued iPhone or apps. If you take any personal photographs using your company phone, you should make sure to get them off of the phone quickly. Better, don’t take personal pictures with your company phone. If you must sync your company issued iPhone or iPad with a computer, make sure it is only a company issued computer, never your personally owned one.

Personal Device Liabilities

Even if during an investigation nothing is turned up on your device related to the company’s investigation, if they find anything incriminating on your device (i.e., child porn, piracy or any other illegal things), you will be held liable for those things they find as a separate case. If something is turned up on your personal device related to the company’s investigation, it could be permanently seized and never returned.  So, you should be aware that if you carry any device onto your company’s premises, your device can become the company’s property.

Caution is Always Wise

With the use of smart phones comes unknown liabilities when used at your place of employment. You should always treat your employer and place of business as a professional relationship. Never feel that you are ‘safe’ because you know everyone there. That doesn’t matter when legal investigations begin. If a court wants to find out everything about a situation, that could include seizing anything they feel is relevant to the investigation. That could include your phone, your home computer, your accounts or anything else that may be relevant. Your Employment Agreement may also allow your employer to seize things that they need if they feel you have violated the terms of your employment. Your employer can also petition the court to require you to relinquish your devices to the court.

Now, that doesn’t mean you won’t get your devices, computers or accounts back. But, it could take months if the investigation drags on and on. To protect your belongings from this situation, here are some …

Tips

  • Read your Employment Agreement carefully
  • Ask to strike language from Agreements that you don’t agree with
  • Make sure agreements with companies eventually expire after you leave the company
  • NDAs should expire 5-10 years after termination
  • Non-compete agreements should expire 1 year after termination
  • Bring devices to the office that you are willing to lose
  • Use cheap dumb phones (lessens your liability)
  • Leave memory sticks and other memory devices at home
  • Don’t use personal devices for company communication (i.e., email or texting)
  • Don’t let the company pay for your personal device bills (especially post-paid cell plans)
  • Prepaid plans are your friend at your office
  • Require your employer to supply and pay for iDevices to support your job function
  • Turn WiFi off on all personal devices and never connect them to corporate networks
  • Don’t connect personal phones to corporate email systems
  • Don’t text any co-workers about company business on personal devices
  • Ask co-workers to refrain from texting your personal phone
  • Use a cheap mp3 player without WiFi or internet features when at the office
  • Turn your personal cell phone off when at work, if at all possible
  • Step outside the office building to make personal calls
  • Don’t use your personal Apple ID when setting up your corporate issued iPhone
  • Create a new separate Apple ID for corporate issued iPhones
  • Don’t link iPhone or Android apps to personal accounts (LinkedIn, Facebook, etc)
  • Don’t take personal photos with a company issued phone
  • Don’t sync company issued phones with your personally owned computer
  • Don’t sync personal phones with company owned computers
  • Replace your device after leaving employment of a company

Nothing can prevent your device from being confiscated under all conditions. But, you can help reduce this outcome by following these tips and by segregating your personal devices and accounts from your work devices and work accounts. Keeping your personal devices away from your company’s property is the only real way to help prevent it from being seized. But, the company could still seize it believing that it may contain something about the company simply because you were or are an employee. Using a dumb prepaid phone is probably the only way to ensure that on seizure, you can get a phone set up and your service back quickly and with the least expense involved. I should also point out that having your phone seized does not count as being stolen, so your insurance won’t pay to replace your phone for this event.

Windows 8 PC: No Linux?

Posted in botch, computers, linux, microsoft, redmond, windows by commorancy on August 5, 2012

According to the rumor mill, Windows 8 PC systems will come shipped with a new BIOS replacement using UEFI (the extension of the EFI standard).  This new replacement boot system apparently comes shipped with a secured booting system that, some say, will be locked to Windows 8 alone.   On the other hand, the Linux distributions are not entirely sure how the secure boot systems will be implemented.  Are Linux distributions being prematurely alarmist? Let’s explore.

What does this mean?

For Windows 8 users, probably not much.  Purchasing a new PC will be business as usual.  For Microsoft, and assuming UEFI secure boot cannot be disabled or reset, it means you can’t load another operating system on the hardware.  Think of locked and closed phones and you’ll get the idea.  For Linux, that would mean the end of Linux on PCs (at least, not unless Linux distributions jump through some secure booting hoops).  Ok, so that’s the grim view of this.  However, for Linux users, there will likely be other options.  That is, buying a PC that isn’t locked.  Or, alternatively, resetting the PC back to its factory default state of being unlocked (which the UEFI should support).

On the other hand, dual booting may no longer be an option with secure boot enabled.  That means it may not be possible to install both Windows and Linux onto the system and choose to boot one or the other at boot time.  Then again, we do not know whether Windows 8 requires UEFI secure boot to boot or whether it can be disabled.  So far it appears to be required, but if you buy a boxed retail edition of Windows 8 (which is not yet available), it may be possible to disable secure boot.  It may be that some of the released to manufacturing (OEM) editions require secure boot.  Some editions may not.

PC Manufacturers and Windows 8

The real question here, though, is what’s driving UEFI secure booting?  Is it Windows?  Is it the PC manufacturers?  Is it a consortium?  I’m not exactly sure.  Whatever the impetus is to move in this direction may lead Microsoft back down the antitrust path once again.  Excluding all other operating systems from PC hardware is a dangerous precedent as this has not been attempted on this hardware before.  Yes, with phones, iPads and other ‘closed’ devices, we accept this.  On PC hardware, we have not accepted this ‘closed’ nature because it has never been closed.  So, this is a dangerous game Microsoft is playing, once again.

Microsoft anti-trust suit renewed?

Microsoft should tread on this ground carefully.  Asking PC manufacturers to lock PCs to exclusively Windows 8 use is a lawsuit waiting to happen.  It’s just a matter of time before yet another class action lawsuit begins and, ultimately, turns into a DOJ antitrust suit.  You would think that Microsoft would have learned its lesson by its previous behaviors in the PC marketplace.  There is no reason that Windows needs to lock down the hardware in this way.

If every PC manufacturer begins producing PCs that preclude the loading of Linux or other UNIX distributions, this treads entirely too close to antitrust territory for Microsoft yet again.  If Linux is excluded from running on the majority of PCs, this is definitely not wanted behavior.  This rolls us back to the time when Microsoft used to lock down loading of Windows on the hardware over every other operating system on the market.  Except that last time, nothing stopped you from wiping the PC and loading Linux; you just had to pay the Microsoft tax to do it.  At that time, you couldn’t even buy a PC without Windows.  This time, according to reports, you cannot even load Linux with secure booting locked to Windows 8.  In fact, you can’t even load Windows 7 or Windows XP, either.  Using UEFI secure boot on Windows 8 PCs treads within millimeters of the same collusive behavior that Microsoft was called on many years back, and ultimately went to court over and lost much money on.

Microsoft needs to listen and tread carefully

Tread carefully, Microsoft.  Locking PCs to running only Windows 8 is as close as you can get to the antitrust suits you thought you were done with.  Unless PC manufacturers give ways of resetting and turning off the UEFI secure boot system to allow non-secure operating systems, Microsoft will once again be seen in collusion with PC manufacturers to exclude all other operating systems from UEFI secure boot PCs.  That is about as antitrust as you can get.

I’d fully expect to see Microsoft (and possibly some PC makers) in DOJ court over antitrust issues.  It’s not a matter of if, it’s a matter of when.  I predict by early 2014 another antitrust suit will have materialized, assuming the predictions about how UEFI secure boot works come true.  On the other hand, this issue is easily mitigated by UEFI PC makers allowing users to disable UEFI secure boot to allow a BIOS boot and Linux to be loaded.  So, the antitrust suits will entirely hinge on how flexibly the PC manufacturers set up UEFI secure booting.  If both Microsoft and the PC makers have been smart about this change, secure booting can be disabled.  If not, we know the legal outcome.

Virtualization

For Windows 8, it’s likely that we’ll see more people moving to use Linux as their base OS with Windows 8 virtualized (except for gamers where direct hardware is required).  If Windows 8 is this locked down, then it’s better to lock down VirtualBox than the physical hardware.

Death Knell for Windows?

Note that should the UEFI secure boot system be as closed as predicted, this may be the final death knell for Windows and, ultimately, Microsoft.  The danger is in the UEFI secure boot system itself.  UEFI is new and untested in the mass market.  This means that not only is Windows 8 new (and we know how that goes bugwise), now we have an entirely new untested boot system in secure boot UEFI.  This means that if anything goes wrong in this secure booting system, Windows 8 simply won’t boot.  And believe me, I predict there will be many failures in the secure booting system itself.  The reason: we are still relying on mechanical hard drives that are highly prone to partial failures.  Even while solid state drives are better, they can also go bad.  So, whatever data the secure boot system relies on (i.e. decryption keys) will likely be stored somewhere on the hard drive.  If this sector of the hard drive fails, no more boot.  Worse, if this secure booting system requires an encrypted hard drive, that means no access to the data on the hard drive after failure, ever.

I’d predict there will be many failures related to this new UEFI secure boot that will lead to dead PCs.  But, not only dead PCs, but PCs that offer no access to the data on the hard drives.  So people will lose everything on their computer.

As people realize this aspect of local storage on an extremely closed system, they will move toward cloud service devices to prevent data loss.  Once they realize the benefits of cloud storage, the appeal of storing things on local hard drives and most of the reasons to use Windows 8 will be lost.  Gamers may be able to keep the Windows market alive a bit longer.  On the other hand, this is why a gaming company like Valve Software is hedging its bets and releasing Linux versions of its games.  For non-gamers, desktop and notebook PCs running Windows will be less and less needed and used.  In fact, I contend this is already happening.  Tablets and other cloud storage devices are already becoming the norm.  Perhaps not so much in the corporate world as yet, but once cloud based Office suites get better, all bets are off.  So, combined with the already trending move towards limited storage cloud devices, closing down PC systems in this way is, at best, one more nail in Windows’ coffin.  At worst, Redmond is playing Taps for Windows.

Closing down the PC market in this way is not the answer.  Microsoft has stated it wants to be more innovative, as Steve Ballmer recently proclaimed.  Yet, moves like this prove to me that Microsoft has clearly not changed and has no innovation left.  Innovation doesn’t have to and shouldn’t lead to closed PC systems and antitrust lawsuits.

How to format NTFS on MacOS X

Posted in Apple, computers, Mac OS X, microsoft by commorancy on June 2, 2012

This article is designed to show you how to mount and manage NTFS partitions in MacOS X.  Note the prerequisites below as it’s not quite as straightforward as one would hope.  That is, there is no native MacOS X tool to accomplish this, but it can be done.  First things first:

Disclaimer

This article discusses commands that will format, destroy or otherwise wipe data from hard drives.  If you are uncomfortable working with commands like these, you shouldn’t attempt to follow this article.  This information is provided as-is and all risk is incurred solely by the reader.  If you wipe your data accidentally by the use of the information contained in this article, you solely accept all risk.  This author accepts no liability for the use or misuse of the commands explored in this article.

Prerequisites

Right up front I’m going to say that to accomplish this task, you must have the following prerequisites set up:

  1. VirtualBox installed (free)
  2. Windows 7 (any flavor) installed in VirtualBox (you can probably use Windows XP, but the commands may be different) (Windows is not free)

For reading / writing to NTFS formatted partitions (optional), you will need one of the following:

  1. For writing to NTFS partitions on MacOS X: a third party NTFS write driver such as Tuxera or ntfs-3g (discussed below).
  2. For reading from NTFS, MacOS X can natively mount and read from NTFS partitions in read-only mode. This is built into Mac OS X.

If you plan on writing to NTFS partitions, I highly recommend Tuxera over ntfs-3g. Tuxera is stable and I’ve had no troubles with it corrupting NTFS volumes which would require a ‘chkdsk’ operation to fix.  On the other hand, ntfs-3g regularly corrupts volumes and will require chkdsk to clean up the volume periodically. Do not override MacOS X’s native NTFS mounter and have it write to volumes (even though it is possible).  The MacOS X native NTFS mounter will corrupt disks in write mode.  Use Tuxera or ntfs-3g instead.

Why NTFS on Mac OS X?

If you’re like me, you have a Mac at work and Windows at home.  Because Mac can mount NTFS, but Windows has no hope of mounting MacOS Journaled filesystems, I opted to use NTFS as my disk carry standard.  Note, I use large 1-2TB sized hard drives and NTFS is much more efficient with space allocation than FAT32 for these sized disks.  So, this is why I use NTFS as my carry around standard for both Windows and Mac.

How to format a new hard drive with NTFS on Mac OS X

Once you have Windows 7 installed in VirtualBox and working, shut it down for the moment.  Note, I will assume that you know how to install Windows 7 in VirtualBox.  If not, let me know and I can write a separate article on how to do this.

Now, go to Mac OS X and open a command terminal (/Applications/Utilities/Terminal.app).  Connect the disk to your Mac via USB or whatever method you wish the drive to connect.  Once you have it connected, you will need to determine which /dev/diskX device it is using.  There are several ways of doing this.  However, the easiest way is with the ‘diskutil’ command:

$ diskutil list
/dev/disk0
   #:                        TYPE NAME                 SIZE        IDENTIFIER
   0:       GUID_partition_scheme                      *500.1 GB   disk0
   1:                         EFI                       209.7 MB   disk0s1
   2:                   Apple_HFS Macintosh HD          499.8 GB   disk0s2
/dev/disk1
   #:                        TYPE NAME                 SIZE        IDENTIFIER
   0:       GUID_partition_scheme                      *2.0 TB     disk1
/dev/disk2
   #:                        TYPE NAME                 SIZE        IDENTIFIER
   0:      Apple_partition_scheme                      *119.6 MB   disk2
   1:         Apple_partition_map                       32.3 KB    disk2s1
   2:                   Apple_HFS VirtualBox            119.5 MB   disk2s2

Locate the drive that appears to be the size of your new hard drive.  If the hard drive is blank (a brand new drive), it shouldn’t show any additional partitions. In my case, I’ve identified that I want to use /dev/disk1.  Remember this device file path because you will need it for creating the raw disk vmdk file. Note the nomenclature above:  The /dev/disk1 is the device to access the entire drive from sector 0 to the very end.  The /dev/diskXsX files access individual partitions created on the device.  Make sure you’ve noted the correct /dev/disk here or you could overwrite the wrong drive.
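
If you want to double check that you’ve identified the right disk before doing anything destructive, diskutil can show the details of a single device. For example (substitute your own disk number):

$ diskutil info /dev/disk1 | more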

Don’t create any partitions with MacOS X in Disk Utility or in diskutil as these won’t be used (or useful) in Windows.  In fact, if you create any partitions with Disk Utility, you will need to ‘clean’ the drive in Windows.

Creating a raw disk vmdk for VirtualBox

This next part will create a raw connector between VirtualBox and your physical drive.  This will allow Windows to directly access the entire physical /dev/disk1 drive from within VirtualBox Windows.  Giving Windows access to the entire drive will let you manage the entire drive from within Windows including creating partitions and formatting them.

To create the connector, you will use the following command in Mac OS X from a terminal shell:

$ vboxmanage internalcommands createrawvmdk \
-filename "/path/to/VirtualBox VMs/Windows/disk1.vmdk" -rawdisk /dev/disk1

It’s a good idea to create the disk1.vmdk where your Windows VirtualBox VM lives. Note, if vboxmanage isn’t in your PATH, you will need to add it to your PATH to execute this command or, alternatively, specify the exact path to the vboxmanage command. In my case, this is located in /usr/bin/vboxmanage.  This command will create a file named disk1.vmdk that will be used inside your Windows VirtualBox machine to access the hard drive. Note that creating the vmdk doesn’t connect the drive to your VirtualBox Windows system. That’s the next step.  Make note of the path to disk1.vmdk as you will also need this for the next step.

Additional notes: if the drive already has any partitions on it (NTFS or MacOS), you will need to unmount any mounted partitions before Windows can access it and before you can createrawvmdk with vboxmanage.  Check ‘df’ to see if any partitions on the drive are mounted.  To unmount, either drop the partition(s) on the trashcan, use umount /path/to/partition or use diskutil unmount /path/to/partition.  You will need to unmount all partitions on the drive in question before Windows or vboxmanage can access it.  Even one mounted partition will prevent VirtualBox from gaining access to the disk.
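
As a sketch, the check and unmount for the example drive might look like this (volume names will differ on your system; unmountDisk unmounts every volume on the disk at once):

$ df -h
$ diskutil unmountDisk /dev/disk1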

Note, if this is a brand new drive, it should be blank and it won’t attempt to mount anything.  MacOS may ask you to format it, but just click ‘ignore’.  Don’t have MacOS X format the drive.  However, if you are re-using a previously used drive and wanting to format over what’s on it, I would suggest you zero the drive (see ‘Zeroing a drive’ below) as the fastest way to clear the drive of partition information.

Hooking up the raw disk vmdk to VirtualBox

Open VirtualBox.  In VirtualBox, highlight your Windows virtual machine and click the ‘Settings’ cog at the top.

  • Click the Storage icon.
  • Click the ‘SATA Controller’
  • Click on the ‘Add Hard Disk’ icon (3 disks stacked).
  • When the question panel appears, click on ‘Choose existing disk’.
  • Navigate to the folder where you created ‘disk1.vmdk’, select it and click ‘Open’.
  • The disk1.vmdk connector will now appear under SATA Controller

You are ready to launch VirtualBox.  Note, if /dev/disk1 isn’t owned by your user account, VirtualBox may fail to open this drive and show an error panel.  If you see any error panels, check to make sure no partitions are mounted and  then check the permissions of /dev/disk1 with ls -l /dev/disk1 and, if necessary, chown $LOGNAME /dev/disk1.  The drive must not have any partitions actively mounted and /dev/disk1 must be owned by your user account on MacOS X.  Also make sure that the vmdk file you created above is owned by your user account as you may need to become root to createrawvmdk.
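
For example, the permission check and fix might look like this (assuming /dev/disk1; sudo is needed because device nodes are normally owned by root):

$ ls -l /dev/disk1
$ sudo chown $LOGNAME /dev/disk1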

Launching VirtualBox

Click the ‘Start’ button to start your Windows VirtualBox.  Once you’re at the Windows login panel, log into Windows as you normally would.  Note, if the hard drive goes to sleep, you may have to wait for it to wake up for Windows to finish loading.

Once inside Windows, do the following:

  • Start->All Programs->Accessories->Command Prompt
  • Type in ‘diskpart’
  • At the DISKPART> prompt, type ‘list disk’ and look for the drive (based on the size of the drive).
    • Note, if you have more than one drive that’s the same exact size, you’ll want to be extra careful when changing things as you could overwrite the wrong drive.  If this is the case, follow these next steps at your own risk!
DISKPART> list disk

  Disk ###  Status    Size     Free  Dyn  Gpt
  --------  --------  -------  ----  ---  ---
  Disk 0    Online    40 GB    0 B
  Disk 1    Online    1863 GB  0 B        *
  • In my case, I am using Disk 1.  So, type in ‘select disk 1’.  It will say ‘Disk 1 is now the selected disk.’
    • From here on down, use these commands at your own risk.  They are destructive commands and will wipe the drive and data from the drive.  If you are uncertain about what’s on the drive or you need to keep a copy, you should stop here and backup the data before proceeding.  You have been warned.
    • Note, ‘Disk 1’ is coincidentally named the same as /dev/disk1 on the Mac.  It may not always follow the same naming scheme on all systems.
  • To ensure the drive is fully blank type in ‘clean’ and press enter.
    • The clean command will wipe all partitions and volumes from the drive and make the drive ‘blank’.
    • From here, you can repartition the drive as necessary.

Creating a partition, formatting and mounting the drive in Windows

  • Using diskpart, here are the commands to create one partition using the whole drive, format it NTFS and mount it as G: (see commands below):
DISKPART> select disk 1
Disk 1 is now the selected disk
DISKPART> clean
DiskPart succeeded in cleaning the disk.
DISKPART> create partition primary
DiskPart succeeded in creating the specified partition.
DISKPART> list partition
  Partition ###  Type     Size     Offset
  -------------  -------  -------  -------
* Partition 1    Primary  1863 GB  1024 KB
DISKPART> select partition 1
Partition 1 is now the selected partition.
DISKPART> format fs=ntfs label="Data" quick
100 percent completed
DiskPart successfully formatted the volume.
DISKPART> assign letter=g
DiskPart successfully assigned the drive letter or mount point.
DISKPART> exit
Leaving DiskPart...

  • The drive is now formatted as NTFS and mounted as G:.  You should see the drive in Windows Explorer.
    • Note, unless you want to spend hours formatting a 1-2TB sized drive, you should format it as QUICK.
    • If you want to validate the drive is good, then you may want to do a full format on the drive.  New drives are generally good already, so QUICK is a much better option to get the drive formatted faster.
  • If you want to review the drive in Disk Management Console, in the command shell type in diskmgmt.msc
  • When the window opens, you should find your Data drive listed as ‘Disk 1’

Note, the reason to use ‘diskpart’ over the Disk Management Console is that you can’t use ‘clean’ in the Disk Management Console.  This command is only available in the diskpart tool and it’s the only way to completely clean the drive of all partitions to make the drive blank again.  This is especially handy if you happen to have previously formatted the drive with MacOS X Journaled FS and there’s an EFI partition on the drive.  The only way to get rid of a Mac EFI partition is to ‘clean’ the drive as above.

Annoyances and Caveats

MacOS X always tries to mount recognizable removable (USB) partitions when they become available.  So, as soon as you have formatted the drive and have shut down Windows, Mac will likely mount the NTFS drive under /Volumes/Data.  You can check this with ‘df’ in Mac terminal or by opening Finder.  If you find that it is mounted in Mac, you must unmount it before you can start VirtualBox to use the drive in Windows.  If you try to start VirtualBox with a mounted partition in Mac OS X, you will see a red error panel in VirtualBox.  Mac and Windows will not share a physical volume.  So you must make sure MacOS X has unmounted the volume before you start VirtualBox with the disk1.vmdk physical drive.
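
So, before each VirtualBox session with the raw vmdk, a quick check and unmount like this works (using the ‘Data’ label from the format example above):

$ df | grep Data
$ diskutil unmount /Volumes/Data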

Also, the raw vmdk drive is specific to that single hard drive.  You will need to go through the steps of creating a new raw vmdk for each new hard drive you want to format in Windows unless you know for certain that each hard drive is truly identical.  The reason is that vboxmanage discovers the geometry of the drive and writes it to the vmdk.  So, each raw vmdk is tailored to each drive’s size and geometry.  It is recommended that you not try to reuse an existing physical vmdk with another drive.  Always create a new raw vmdk for each drive you wish to manage in Windows.

Zeroing a drive

While the clean command clears off all partition information in Windows, you can also clean off the drive in MacOS X.  The way to do this is by using dd.  Again, this command is destructive, so be sure you know which drive you are operating on before you press enter.  Once you press enter, the drive will be wiped of data.  Use this section at your own risk.

To clean the drive use the following:

$ dd if=/dev/zero of=/dev/disk1 bs=4096 count=10000

This command will write 10000 * 4096 byte blocks with all zeros.  This should overwrite any partition information and clear the drive off.  You may not need to do this as the diskpart ‘clean’ command may be sufficient.

Using chkdsk

If the drive has become corrupted or is acting in a way you think may be a problem, you can always go back into Windows with the disk1.vmdk connector and run chkdsk on the volume.  You can also use this on any NTFS or FAT32 volume you may have.  You will just need to create a physical vmdk connector and attach it to your Windows SATA controller and make sure MacOS X doesn’t have it mounted.  Then, launch VirtualBox and clean it up.
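
Inside Windows, the repair itself is a single command from the same Command Prompt used earlier, assuming the volume is mounted as G: as in the example above:

C:\> chkdsk g: /f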

Tuxera

If you are using Tuxera to mount NTFS, once you exit out of Windows with your freshly formatted NTFS volume, Tuxera should immediately see the volume and mount it.  This will show you that NTFS has been formatted properly on the drive.  You can now read and write to this volume as necessary.

Note that this method to format a drive with NTFS is the safest way on Mac OS X.  While there may be some native tools floating around out there, using Windows to format NTFS will ensure the volume is 100% compliant with NTFS and Windows.  Using third party tools not written by Microsoft could lead to data corruption or improperly formatted volumes.

Of course, you could always connect the drive directly to a Windows system and format it that way. 😉


How not to run a business (Part 3) — SaaS edition

Posted in business, cloud computing, computers by commorancy on May 8, 2012

So, we’ve talked about how not to run a general business, let’s get to some specifics.  Since software as a service (SaaS) is now becoming more and more common, let’s explore software companies and how not to run these.

Don’t add new features because you can

If a customer is asking for something new, then add that new feature at some appointed future time.  Do not, however, think that that feature needs to be implemented tomorrow.  On the other hand, if you have conceived something that you think might be useful, do not spend time implementing it until someone is actually asking for it.  This is an important lesson to learn.  It’s a waste of time to write code that no one will actually use.  So, if you think your feature has some merit, invite your existing customers to a discussion by asking them if they would find the proposed feature useful.  Your customers have the final say.  If the majority of your customers don’t think they would use it, scrap the idea.  Time spent writing a useless feature is time wasted.  Once written, the code has to be maintained by someone, which is an additional waste of time.

Don’t tie yourself to your existing code

Another lesson to learn is that your code (and app) needs to be both flexible and trashable.  Yes, I said trashable.  You need to be willing to throw away code and rewrite it if necessary. That means, code flows, changes and morphs.  It does not stay static.  Ideas change, features change, hardware changes, data changes and customer expectations change.  As your product matures and requires more and better infrastructure support, you will find that your older code becomes outdated.  Don’t be surprised if you find yourself trashing much of your existing code for completely new implementations taking advantage of newer technologies and frameworks.  Code that you may have written from scratch to solve an early business problem may now have a software framework that, while not identical to your code, will do what your code does 100x more efficiently. You have to be willing to dump old code for new implementations and be willing to implement those ideas in place of old code.  As an example, usually early code does not take high availability into account.  Therefore, gutting old code that isn’t highly available for new frameworks that are is always a benefit to your customers.  If there’s anything to understand here, code is not a pet to get attached to.  It provides your business with a point in time service set.  However, that code set must grow with your customer’s expectations. Yes, this includes total ground-up rewrites.

Don’t write code that focuses solely on user experience

In software-as-a-service companies, many early designs focus solely on the customer experience the code delivers.  The problem is that the design team can become so focused on building the customer experience that they forget all about the manageability of the code from an operational perspective.  Don't write your code this way. Your company's ability to support that user experience will suffer greatly from this mistake. Operationally, the code must be manageable, supportable and functional, and it must start up, pause and stop consistently.  That means don't write code that, when it fails, leaves garbage in tables, half-completed transactions with no way to restart them, or huge temporary files in /tmp.  This is sloppy code design at best.  At worst, it's garbage code that needs to be rewritten.

All software designs should plan for both the user experience and the operational functionality.  You can't expect your operations team to become janitors who clean up after sloppy code that leaves garbage everywhere.  Which leads to …

Don’t write code that doesn’t clean up after itself

If your code writes temporary tables or otherwise uses temporary mechanisms to complete its processing, clean these up not only on a clean exit, but also under failure conditions.  I know of no language that, when used correctly, cannot clean up after itself even under the most severe software failure conditions.  Learn to use these mechanisms.  Better yet, don't write code that leaves lots of garbage behind at any point in time.  Consume what you need in small blocks and limit the damage under failure conditions.
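
To make this concrete, here's a minimal Python sketch (the publish step is a caller-supplied stand-in, not any particular API) showing how a try/finally block guarantees temporary files are removed on success and on failure alike:

    import os
    import tempfile

    def process_batch(records, publish):
        # Stage results in a temporary file before the final publish step.
        fd, tmp_path = tempfile.mkstemp(prefix="batch-")
        try:
            with os.fdopen(fd, "w") as tmp:
                for record in records:
                    tmp.write(str(record) + "\n")
            publish(tmp_path)  # caller-supplied final step; may raise
        finally:
            # Runs on success *and* on any failure above, so no orphaned
            # files accumulate in /tmp after a crash.
            if os.path.exists(tmp_path):
                os.remove(tmp_path)

    # Usage: process_batch(["a", "b"], publish=print)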

Additionally, if your code needs to process a series of steps, checkpoint those steps.  That means saving the checkpoint somewhere.  So, if the code fails at step 3 of 5, another process can come along, pick up at step 3 and move forward.  Leaving half-completed transactions opens your customers up to user experience problems.  Always make sure your code can restart at the last checkpoint after a failure.  Remember, user experience isn't limited to a web interface…
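
A rough sketch of the idea in Python, with a hypothetical checkpoint file name; a real implementation would store the checkpoint somewhere durable and shared:

    import json
    import os

    CHECKPOINT = "job.checkpoint"  # hypothetical checkpoint location

    def run_steps(steps):
        # Resume at the last recorded step, or start from the beginning.
        start = 0
        if os.path.exists(CHECKPOINT):
            with open(CHECKPOINT) as f:
                start = json.load(f)["next_step"]
        for i in range(start, len(steps)):
            steps[i]()  # if this raises, the checkpoint below isn't written
            with open(CHECKPOINT, "w") as f:
                json.dump({"next_step": i + 1}, f)
        os.remove(CHECKPOINT)  # all steps completed cleanly

    # Usage: run_steps([step_one, step_two, step_three]) with your own callables.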

Don’t think that the front end is all there is to user experience

One of the mistakes that a lot of design teams fall into is thinking that the user experience is tied solely to the way the front end interacts.  Unfortunately, this design approach has failure written all over it.  Operationally, the back end processing is as much a part of the user experience as the front end interface.  Sure, the interface is what the user sees and how the user interacts with your company's service.  At the same time, what the user does on the front end directly drives what happens on the back end.  Seeing as your service is likely multiuser capable, each user's actions need their own separate allocation of back end resources to complete their requests.  Designing the back end to process user requests serially will lead to backups when you have 100, 1,000 or 10,000 users online.

It's important to design both the front end experience and the back end processing to support a fully scalable multiuser experience.  Most operating systems today are fully capable of multitasking with both multiprocess and multithreaded support.  So, take advantage of these features and run your users' processing requests concurrently, not serially.  Even better, make sure they can scale properly.
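
For illustration only, here's one way this might look in Python using a thread pool; handle_request and the incoming list are stand-ins for your actual per-user work and request queue:

    from concurrent.futures import ThreadPoolExecutor

    def handle_request(req):
        # Hypothetical per-user unit of work; avoid shared mutable
        # state here unless it's properly locked.
        return len(str(req))

    incoming = ["req-1", "req-2", "req-3"]  # stand-in for a queue of user requests

    # Each request gets its own worker instead of waiting behind the
    # previous one; max_workers caps resource use (see the limits section below).
    with ThreadPoolExecutor(max_workers=32) as pool:
        results = list(pool.map(handle_request, incoming))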

Don’t write code that sets no limits

One of the most damaging things you can do for user experience is to tell your customers there are no limits in your application.  As soon as those words are uttered from your lips, someone will be on your system testing that statement: first by seeing how much data it takes before the system breaks, then by stating that you are lying.  Bad from all aspects.  The takeaway here is that all systems have limits: disk capacity, disk throughput, network throughput, network latency (the Internet itself is unpredictable), database limits, process limits and so on.  There are limits everywhere in every operating system, every network and every application.  You can't claim that your application gives unlimited capabilities without that being a lie.  Eventually, your customers will hit a limit and you'll be standing there scratching your head.

No, it's far simpler not to make this statement.  Set quotas, set limits, set expectations that data sets perform best when they remain within a given range.  Customers are actually much happier when you give them realistic limits and set their expectations appropriately.  Far-fetched claims leave your company open to problems.  Don't do this.
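
A tiny hypothetical example of enforcing one such limit at the boundary, in Python (the 50 MB figure and the store step are made up for illustration):

    MAX_UPLOAD_BYTES = 50 * 1024 * 1024  # a published, documented limit

    def accept_upload(payload: bytes, store):
        # Reject oversized input with a clear error at the boundary,
        # instead of letting it degrade something deeper in the stack.
        if len(payload) > MAX_UPLOAD_BYTES:
            raise ValueError("upload exceeds the documented 50 MB limit")
        store(payload)  # caller-supplied storage step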

Don’t rely on cron to run your business

Ok, so I know some people will say, why not?  Cron, while a decent scheduling system, isn't without its own share of problems.  One of its biggest problems is that its smallest level of granularity is once per minute.  If you need something to run more frequently than every minute, you are out of luck with cron.  Cron also requires hard-coded scripts placed in specific directories in order to function.  Cron doesn't have an API.  Cron exposes no statistics other than what you can dig out of log files.  Note, I'm not hating on cron.  Cron is a great system administration tool and has a lot going for it when used for relatively infrequent administrative tasks.  It's just not designed for heavy, mission-critical load. If you're doing distributed processing, you will need to launch tasks in a more decentralized way anyway, so cron likely won't work in a distributed environment.  Cron also has a propensity to stop working internally while leaving itself in the process list, so monitoring systems will think it's working when it's not actually launching any tasks.

If you're a Windows shop, don't rely on the Windows scheduler to run your business either.  Why?  The Windows scheduler historically shipped as a component of Internet Explorer (IE), and when IE changes, the scheduler can stop or fail along with it.  Considering the frequency with which Microsoft releases updates to both the operating system and IE, you'd be wise to find another scheduler that is not likely to be impacted by Microsoft's incessant need to modify the operating system.

Find or design a more reliable scheduler that works in a scalable, fault-tolerant way.
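
As a toy illustration of the granularity point only (this is nowhere near a production scheduler), a plain Python loop can already fire tasks more often than once per minute:

    import time

    def run_every(interval_seconds, task):
        # Fires more often than cron's once-per-minute floor, and the
        # surrounding service can expose stats or an API around it.
        next_run = time.monotonic()
        while True:
            task()
            next_run += interval_seconds
            time.sleep(max(0.0, next_run - time.monotonic()))

    # Usage: run_every(10, poll_work_queue) for a hypothetical 10-second task.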

Don't rely on monitoring systems (or your operations team) to find every problem, or to find problems in a timely manner

Monitoring systems are designed by humans to find problems and send alerts.  Monitoring systems are, by their very nature, reactive.  This means that monitoring systems only alert you AFTER they have found a problem, never before. Worse, most monitoring systems only alert after multiple checks have failed.  This means that not only is the service down, it's probably been down for 15-20 minutes by the time the system alerts.  In that time, your customers may have already noticed that something is going on.

Additionally, to monitor any given application feature, the monitoring system needs a window into that specific feature.  For example, monitoring Windows WMI components or Windows message queues from a Linux monitoring system is nearly impossible.  Linux has no components at all with which to access, for example, the Windows WMI system or Windows message queues.  That said, a third party monitoring system with an agent process on the Windows system may or may not be able to access WMI.

Always design your code to provide a window into critical application components and functionality for monitoring purposes. Without such a monitoring window, these applications can be next to impossible to monitor.  Better yet, design with standardized components that work across all platforms instead of relying on platform-specific components.  Either that, or choose a single platform for your business environment and stick with that choice.  Note that it is not the responsibility of the operations team to find windows to monitor.  It's the application engineering team's responsibility to provide the necessary windows into the application.
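
One possible shape for such a window, sketched in Python: a tiny HTTP health endpoint that any monitoring system on any platform can poll.  The queue_depth counter is a hypothetical stand-in for whatever internals matter in your application:

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    def queue_depth():
        return 0  # stand-in: report a real internal counter here

    class HealthHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path != "/health":
                self.send_response(404)
                self.end_headers()
                return
            # A platform-neutral window into the application that any
            # monitoring system can poll over plain HTTP.
            body = json.dumps({"queue_depth": queue_depth()}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

    # To serve: HTTPServer(("", 8080), HealthHandler).serve_forever()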

Don’t expect your operations team to debug your application’s code

Systems administrators are generally not programmers.  Yes, they can write shell scripts, but they don't write application code. If your application is written in PHP, C, C++ or Java, don't expect your operations team to review your application's code, debug it or even understand it.  Yes, they may be able to review some Java or PHP, but their job is not to write or review your application's code. Systems administrators are tasked with managing the operating systems and components.  That is, making sure the hardware and operating system are healthy so the application can function and thrive.  Debugging the application is the task of your software engineers.  Yes, a systems administrator can find bugs and report them, just as anyone can.  Determining why that bug exists is your software engineers' responsibility.  If you expect your systems administrators to understand your application's code at that level of detail, they are no longer systems administrators; they're software engineers.  Keeping job roles separate is important in keeping your staff from becoming overloaded with unnecessary tasks.

Don’t write code that is not also documented

This is plain and simple programming 101.  Your software engineers' responsibilities are to write robust code, but also to document everything they write.  That's their job responsibility and should be part of their job description.  If they do not, cannot or are unwilling to document the code they write, they should be put on a performance review plan and, without improvement, walked to the door.  Without documentation, reverse engineering their code can take weeks for new personnel.  Documentation is critical to your business's continued success, especially when personnel change.  Think of this like you would disaster recovery.  If you suddenly no longer had your current engineers available and had to hire all new engineers, how quickly could the new engineers understand your application's code well enough to release a new version?  This can be a make-or-break situation.  Documentation is the key here.

Thus, documentation must be part of any engineer's responsibility when they write code for your company.  Code review by management is equally important to ensure that the code not only seems reasonable (i.e., no gotos), but is fully documented and attributed to its author.  Yes, the author's name and the date the code was written should be included in comments surrounding each section of code.  All languages provide ways to comment within the code; require your staff to use them.
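
For example, a commented and attributed function might look something like this in Python (names, author and dates are purely illustrative):

    def reconcile_invoices(batch_id):
        """Match invoices in a billing batch against received payments.

        Author: J. Smith, 2012-05-01 (illustrative attribution only)
        Flags mismatches for manual review rather than auto-correcting,
        per the feature specification.
        """
        raise NotImplementedError  # body omitted; the documentation is the point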

Don’t expect your code to test itself or that your engineers will properly test it

Your software engineers are far too close to the code to determine whether it works correctly under all scenarios.  Plain and simple, software doesn't test itself. Use an independent quality testing group to ensure that the code performs as expected based on the design specifications.  Yes, always test based on the design specifications.  Clearly, your company should have a road map of features and exactly how those features are expected to perform.  These features should be driven by customer requests.  Your quality assurance team should have a list of all new features going into each release well in advance so they can write thorough test cases.  Then, when the code is ready, they can put the release candidate into the testing environment and run through their test cases.  As I said, don't rely on your software engineers to provide this level of testing.  Use a full quality assurance team to review and sign off on the test cases to ensure that the features work as defined.

Don’t expect code to write (or fix) itself

Here's another one that would seem self-explanatory.  Basically, when a feature comes along that needs to be implemented, don't expect the code to spring up out of nowhere.  You need competent technical people who fully understand the design to write the code for any new feature.  But, just because an engineer has written code doesn't mean the code actually implements the feature.  Always have test cases ready to ensure that the implemented feature actually performs the way it was intended.

If the code doesn’t perform what it’s supposed to after having been implemented, obviously it needs to be rewritten so that it does.  If the code written doesn’t match the requested feature, the engineer may not understand the requested feature enough to implement it correctly.  Alternatively, the feature set wasn’t documented well enough before having been sent to the engineering team to be coded.  Always document the features completely, with pseudo-code if necessary, prior to being sent to engineering to write actual code.  If using an agile engineering approach, review the progress frequently and test the feature along the way.

Additionally, if the code doesn't work as expected and is rolled to production broken, don't expect that code to magically start working or that the production team has some kind of magic wand to fix the problem.  If it's a coding problem, it's a software engineering task to resolve.  Whether or not the production team (or even a customer) manages to find a workaround is irrelevant to actually fixing the bug.  If a bug is found and documented, fix it.

Don’t let your software engineers design features

Your software engineers are there to write code based on features derived from customer feedback.  Don't let your software engineers write code for features not on the current road map. This is a waste of time and doesn't help get your newest release out the door.  Make sure that your software engineers remain focused on the current set of features destined for the next release.  Focusing on anything else could delay that release.  If you want to stick to a specific release date, always keep your engineers focused on the features destined for that release.  Of course, fixing bugs from previous releases is also a priority, so make sure they have enough time to work on these while still coding for the newest release.  If you have the manpower, focus some people on bug fixing and others on new features.  If the code is documented well enough, a separate bug-fixing team should have no difficulty creating patches for bugs in the current release.

Don’t expect to create 100% perfect code

So, this one almost goes without saying, but it does need to be said.  Nothing is ever bug free.  This section is here to illustrate why you need to design your application using a modular patching approach.  It goes back to operational manageability (as stated above).  Design your application so that code modules can be drop-in replaced easily while the code is running.  This means that the operations team (or whomever is tasked with patching) simply drops a new code file in place, tells the system to reload, and within minutes the new code is operating.  Modular drop-in replacement while running is the only way to prevent major downtime (assuming the code is fully tested).  As a SaaS company, you should always design your application with high availability in mind.  Full code releases, on the other hand, should have a separate installation process from drop-in replacement, although if you would like to use the dynamic patching process for more agile releases, that is definitely an encouraged design feature.  The more easily you design manageability and rapid deployment into your code for the operations team, the fewer operations people you need to manage and deploy it.
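
In Python, for instance, the module reload mechanism hints at how drop-in patching can work.  This is only a sketch (the billing module is hypothetical), and real hot-patching must also deal with in-flight requests and module state:

    import importlib
    import billing  # hypothetical application module

    def hot_patch(module):
        # Re-reads the module's source from disk and rebinds its names,
        # so a dropped-in code file takes effect without a full restart.
        return importlib.reload(module)

    # Operations drops the patched billing.py in place, then:
    billing = hot_patch(billing)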

Without the distractions of long, involved release processes, the operations team can focus on hardware design, implementation and general growth of the operations processes.  The more distractions your operations team has with bugs, fixing bugs, patching bugs and general code-related issues, the less time they have to spend on the infrastructure side making your application perform its best.  The operations team also has to keep up with operating system patches, software releases, software updates and security issues that may affect your application or the security of your users' data.

Don’t overlook security in your design

Many people who write code implement a feature without a thought to security.  I'm not necessarily talking about blatantly obvious things like using logins and passwords to get into your system, although if you don't have those, you need to add them. Clearly, logins are required if you want multiple users using your system at once.  No, I'm discussing the more subtle but damaging security problems such as cross-site scripting or SQL injection attacks. Always have your site's code thoroughly tested against a suite of security tools prior to release, and fix any security problems revealed before rolling that code out to production.  Don't wait until the code is in production to fix security vulnerabilities.  If your quality assurance team isn't testing for security vulnerabilities as part of the QA sign-off process, then you need to rethink and restructure your QA testing methodologies. Otherwise, you may find yourself becoming the next Sony Playstation Store headline at Yahoo News or CNN.  You don't want this type of press for your company, and you don't want your company to be known for losing customer data.

Additionally, you should always store user passwords and other sensitive user data in one-way hashed form.  You can store the last 4 digits of social security numbers or account numbers in clear text, but do not store the whole number in plain text, with two-way encryption, or in a form that is easily reversed or looked up (such as an unsalted MD5 hash). Always use a reasonably strong, purpose-built one-way algorithm to store sensitive data.  If you need access to that data, this will require the user to enter the whole string to unlock whatever it is they are trying to access.
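
As a minimal Python sketch of the one-way approach using a salted, deliberately slow derivation (PBKDF2 here; the iteration count and salt size are illustrative, not a recommendation for your environment):

    import hashlib
    import hmac
    import os

    def hash_password(password: str) -> bytes:
        # PBKDF2 is a deliberately slow one-way derivation; the random
        # salt means identical passwords don't produce identical hashes.
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        return salt + digest

    def verify_password(password: str, stored: bytes) -> bool:
        salt, digest = stored[:16], stored[16:]
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
        # Constant-time comparison avoids leaking information via timing.
        return hmac.compare_digest(candidate, digest)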

Don’t expect your code to work on terabytes of data

If you're writing code that manages SQL queries or, more specifically, constructs SQL queries from some kind of structured input, don't expect your query to return quickly when run against gigabytes or terabytes of data, thousands of columns or billions of rows.  Test your code against large data sets.  If you don't have a large data set to test against, you need to find or build one.  It's plain and simple: if you can't replicate your biggest customers' environments in your test environment, then you cannot test all edge cases against the code that was written.  SQL queries carry lots of penalties against large data sets due to explain plans and statistical tables that must be built.  If you don't test your code, you will find that these statistical tables are not at all built the way you expect, and the query may take 4,000 seconds instead of 4 seconds to return.
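
If you need a quick way to manufacture a large test set, something like this Python/SQLite sketch works (your real tests should, of course, target the same database engine and scale your customers use):

    import random
    import sqlite3

    # Build a synthetic million-row table so the query is exercised at
    # realistic scale, not against the ten rows in a dev database.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE events (user_id INTEGER, ts INTEGER, kind TEXT)")
    conn.executemany(
        "INSERT INTO events VALUES (?, ?, ?)",
        ((random.randrange(10_000), i, "click") for i in range(1_000_000)),
    )

    # Inspect the query plan before trusting the query at scale; a full
    # table scan here is the kind of surprise that turns 4 seconds into 4,000.
    for row in conn.execute(
            "EXPLAIN QUERY PLAN SELECT COUNT(*) FROM events WHERE user_id = 42"):
        print(row)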

Alternatively, if you're using very large data sets, it might be worth exploring technologies such as Hadoop and Cassandra instead of traditional relational databases like MySQL.  Hadoop and Cassandra are NoSQL implementations, so you forfeit structured queries to retrieve the data, but very large data sets can be randomly read and written, in many cases, much faster than with a SQL ACID database.

Don’t write islands of code

You would think in this day and age that people would understand how frameworks work.  Unfortunately, many people don’t and continue to write code that isn’t library or framework based.  Let’s get you up to speed on this topic.  Instead of writing little disparate islands of code, roll the code up under shared frameworks or shared libraries. This allows other engineers to use and reuse that code in new ways.  If it’s a new feature, it’s possible that another bit of unrelated code may need to pull some data from another earlier implemented feature.  Frameworks are a great way to ensure that reusing code is possible without reinventing the wheel or copying and pasting code all over the place.  Reusable libraries and frameworks are the future.  Use them.

Of course, these libraries and frameworks need to be fully documented, with specifications of the calls, before they can be reused by other engineers in other parts of the code.  So, documentation is critical to code reuse.  Better still, object-oriented programming allows not only reuse but inheritance.  You can inherit an object in its template form and add your own custom additions to expand its usefulness.
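
A small illustrative Python example of the inheritance point (class names invented for the sketch):

    class ReportExporter:
        """Shared library base: one documented, reusable export path."""

        def export(self, rows):
            return "\n".join(self.format_row(r) for r in rows)

        def format_row(self, row):
            return ",".join(str(v) for v in row)

    class TabExporter(ReportExporter):
        # Inherit the template and override only what differs, instead
        # of copying and pasting the whole exporter elsewhere.
        def format_row(self, row):
            return "\t".join(str(v) for v in row)

    # Usage: TabExporter().export([(1, "a"), (2, "b")])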

Don’t talk and chew bubble gum at the same time

That is, don't try to be too grandiose in your plans.  Your team has limited time between the start of a development cycle and the roll out of a new release.  Make sure that your feature set is compatible with this deadline.  Sure, you can throw in everything including the kitchen sink, but don't expect your engineering team to deliver on time or, if they do manage to deliver, that the code will work half as well as you expect.  Instead, pare your feature sets down to manageable chunks.  Then, group the chunks into releases throughout the year.  Set expectations that you want a certain feature set in a given release.  Make sure, however, that the feature set is attainable in the time allotted with the number of engineers you have on staff.  If you have a team of two engineers and a development cycle of one month, don't expect those engineers to implement hundreds of complex features in that time.  Be realistic, but at the same time, know what your engineers are capable of.

Don’t implement features based on one customer’s demand

If someone made a sales promise to deliver a feature to one, and only one, customer, you've made a serious business mistake.  Never promise an individual feature to an individual customer.  While you may be able to retain that customer by implementing the feature, you will run yourself and the rest of your company ragged trying to fulfill the promise.  Worse, that customer has no loyalty to you.  Even if you spend 2-3 weeks in a day-and-night coding frenzy to meet the customer's requirement, the customer will not be any more loyal to you after you have released the code.  Sure, it may make the customer briefly happy, but at what expense?  By the time you've gotten to this level of desperation with a customer, they are likely already on the way out the door, so these crunch requests are usually last-ditch efforts at customer retention.  Worse, while the company runs itself ragged desperately trying to roll this new feature, all other customers needing attention get ignored, yet these harried features end up as such customized one-offs that no other customer can even use them without a major rewrite.  The code is effectively useless to anyone other than the requesting customer, who is likely within inches of terminating their contract.  Don't do it.  If your company gets into this desperation mode, you need to stop and rethink your business strategy and why you are in business.

Don’t forget your customer

You need to hire a high quality sales team that is attentive to customer needs.  But, more than this, they need to periodically talk to your existing clients on customer relations terms.  Basically, ask the right questions and determine whether the customer is happy with the services.  I've seen many cases where a customer appears completely happy while, in reality, they have either been shopping around or have been approached by competition and wooed away with a better deal.  You can't assume that any customer is so entrenched in your service that they won't leave.  Instead, your sales team needs to take a proactive approach and reach out to customers periodically to get feedback, determine needs and ask if they have any questions regarding their services.  If a contract is within 3 months of renewal, the sales team needs to be on the phone discussing renewal plans.  Don't wait until a week before the renewal to contact your customers.  By a week out, it's likely that the customer has already been approached by competition and it's far too late to participate in any vendor review process.  Just because a customer has a current contract with you does not make you a preferred vendor.  You always want to participate in the vendor review process, so contact your customer and ask when that process begins.  Don't blame the customer if you weren't included in a vendor review and purchasing process.  It's your sales team's job to find out when vendor reviews commence.

Part 2 | Part 4 | Chapter Index Page


Amazon Kindle: Buyer’s Security Warning

Posted in best practices, computers, family, security, shopping by commorancy on May 4, 2012

If you're thinking of purchasing a Kindle or Kindle Fire, beware. Amazon ships the Kindle pre-registered to your account, meaning it's registered before the device is even in your hands. What does that mean? It means the device is ready to make purchases against your account without being in your possession. Amazon does this to make it 'easy'. Unfortunately, this is a huge security risk. You need to take some precautions before the Kindle arrives.

Why is this a risk?

If the package gets stolen, it's not only a hassle to get the device replaced, it also means the thief can rack up purchases on your registered credit card without you being immediately aware. The bigger security problem, however, is that the Kindle does not require a login and password to purchase content. Once registered to your account, the device already has consent to purchase with no further security. Unlike the iPad, which asks for a password before a purchase, the Kindle can buy content straight onto your credit card without any further prompts. You will only find out about the purchases through email receipts after they have been made. At that point, you will have to dispute the charges with Amazon and, likely, with your bank.

This is bad on many levels, but it's especially bad while the item is in transit, before you receive the device in the mail. If the device is stolen in transit, your account could end up being charged for content by the thief, as described above. Also, if you have a child you'd like to let use the device, they can also make easy purchases because the device is registered and requires no additional passwords. They just click and you've bought.

What to do?

When you order a Kindle, you will want to find and de-register that Kindle (it may take 24 hours before it appears) until it safely arrives in your possession and is working as you expect. You can find the Kindles registered to your account by clicking (from the front page while logged in) the 'Your Account->Manage Your Kindle' menu, then clicking 'Manage Your Devices' in the left side panel. From here, look for any Kindles you may have recently purchased and click 'Deregister'. Follow through any prompts until the device is unregistered. You can re-register the device when it arrives.

If you’re concerned that your child may make unauthorized purchases, either don’t let them use your Kindle or de-register the Kindle each time you give the device to your child. They can use the content that’s on the device, but they cannot make any further purchases unless you re-register the device.

Kindle as a Gift

Still a problem. Amazon doesn't treat gift purchases any differently. If you are buying a Kindle for a friend, a co-worker or even as a giveaway for your company's party, you will want to explicitly find the purchased Kindle in your account and de-register it. Otherwise, the person who receives the device could potentially rack up purchases on your account without you knowing.

Shame on Amazon

Amazon should stop this practice of pre-registering Kindles pronto. A Kindle should only be registered after the device has arrived in the possession of its rightful owner. Then, and only then, should the device be registered to the consumer's Amazon account as part of the setup process using an authorized Amazon login and password (or via the Manage Your Devices section of the Amazon account). The consumer should be the sole party responsible for authorizing devices on their account. Pre-registration is a bad practice and a huge security risk to the holder of the Amazon account who purchased the Kindle. It also makes gifting Kindles extremely problematic. Amazon, it's time to stop this bad security practice or place more security mechanisms on the Kindle before a purchase can be made.


When Digital Art Works Infringe

Posted in 3D Renderings, art, best practices, computers, economy by commorancy on March 12, 2012

What is art?  Art is an expression created by an individual using some type of media.  Traditional media typically includes acrylic paint, oil paint, watercolor, clay or porcelain sculpture, screen printing, metal etching and printing, or any other tangible media.  Art can also be made from found objects such as bicycles, inner tubes, paper, trash, tires, urinals or anything else that can be found and incorporated.  Sometimes the objects are painted, sometimes not.  Art is the expression once it has been completed.

Digital Art

So, what's different about digital art?  Nothing really.  Digital art is still based on digital assets, including software and 3D objects, used to produce pixels in a 2D format that depicts an image.  Unlike traditional media, digital media is limited to flat 2D imagery when complete (unless printed and turned into a real world object, which then becomes another form of 'traditional found art media' as listed above).

Copyrights

What are copyrights?  Copyrights are rights to copy a specific likeness of something, restricting usage to only those who have permission.  That is, when an object or subject, whether real-world or digital-world, has been created by someone, any likeness of that subject is covered by copyright.  This has also been extended to celebrities, in that a celebrity's likeness can be protected as well.  In short, the likeness of a copyrighted subject is controlled strictly by the owner of the copyright.  Note that copyrights are born as soon as the object or person exists.  These are implicit copyrights.  These rights can be made explicit by submitting a form to the U.S. Copyright Office or similar agencies in other parts of the world.

Implicit or explicit, copyrights are there to restrict usage of that subject to those who wish to use it for their own gain.  Mickey Mouse is a good example of a copyrighted property.  Anyone who creates, for example, art containing a depiction of Mickey Mouse is infringing on Disney’s copyright if no permission was granted before usage.

Fair Use

What is fair use?  Fair use is supposed to be a way to use copyrighted works that allows for usage without permission.  Unfortunately, what’s considered fair use is pretty much left up to the copyright owner to decide.  If the copyright holder decides that a depiction is not considered fair use, it can be challenged in a court of law.  This pretty much means that any depiction of any copyrighted character, subject, item or thing can be challenged in a court of law by the copyright holder at any time.  In essence, fair use is a nice concept, but it doesn’t really exist in practice.  There are clear cases where a judge will decide that something is fair use, but only after ending up in court.  Basically, fair use should be defined so clearly and completely that, when something is used within those constraints, no court is required at all. Unfortunately, fair use isn’t defined that clearly.  Copyrights leave anyone attempting to use a copyrighted work at the mercy of the copyright holder in all cases except when permission is granted explicitly in writing.

Public Domain

Public domain is a copyright status that says there is no copyright.  That is, the copyright no longer exists and the work can be freely used, given away, sold or copied in any way, by anyone, without permission.

3D Art Work

When computers first came into being with reasonable graphics, paint packages became common.  That is, a way to push pixels around on the screen to create an image.  At first, most of the usage of these packages was for utility (icons and video games).  Inevitably, this media evolved to mimic real world tools such as chalk, pastels, charcoal, ink, paint and other media.  But, these paint packages were still simply pushing pixels around on the screen in a flat way.

Enter 3D rendering.  These packages now mimic 3D objects in a 3D space.  These objects are placed into a 3D world and then effectively ‘photographed’.  So, 3D art has more in common with photography than it does painting.  But, the results can mimic painting through various rendering types.  Some renderers can simulate paint strokes, cartoon outlines, chalk and other real world media.  However, instead of just pushing pixels around with a paint package, you can load in 3D objects, place them and then ‘photograph’ them.

3D objects, Real World objects and Copyrights

All objects become copyrighted by the people who create them.  So, a 3D object may or may not need permission for usage (depending on how it was copyrighted).  However, when dealing with 3D objects, the usage permissions are usually limited to copying and distribution of the objects themselves.  Copyright does not generally cover creating a 3D rendered likeness of an object (unless, of course, the likeness happens to be Mickey Mouse), in which case it isn't the object that's copyrighted, but the subject matter. This is the gray area surrounding the use of 3D objects.  In the real world, you can run out, take a picture of your Lexus and post it on the web without any infringement.  In fact, you can sell your Lexus to someone else, because of the First Sale Doctrine, even though that object may be copyrighted.  You can also sell the photograph you took of your Lexus because it's your photograph.

On the other hand, if you visit Disney World and take a picture of a costumed Mickey Mouse character, you don't necessarily have the right to sell that photograph.  Why?  Because Mickey Mouse is a copyrighted character and Disney holds ownership of all likenesses of that character.  You also took the photo inside the park, which may have photographic restrictions (you have to read the ticket). Yes, it's your photograph, but you don't own the subject matter, Disney does.  Again, a gray area.  On the other hand, if you build a Mickey Mouse costume from scratch and photograph yourself in it outside the park, you still may not be able to sell the photograph.  You can likely post it to the web, but you can't sell it due to the copyrighted character it contains.

In the digital world, these same ambiguous rules apply with even more exceptions.  If you use a 3D object of Mickey Mouse that you either created or obtained (it doesn’t really matter which because you’re not ultimately selling or giving away the 3D object) and you render that Mickey Mouse character in a rendering package, the resulting 2D image is still copyrighted by Disney because it contains a likeness of Mickey Mouse.  It’s the likeness that matters, not that you used an object of Mickey Mouse in the scene.

Basically, the resulting 2D image and the likeness it contains are what matter here.  If you happened to make the 3D object of Mickey Mouse from scratch (to create the 2D image), you're still restricted.  You can't sell that 3D object of Mickey Mouse either.  That's still infringement.  You might be able to give it away, though Disney could still balk as it's unlicensed.

But, I bought a 3D model from Daz…

“Am I not protected?” No, you're not.  If you bought a 3D model of the likeness of a celebrity or of a copyrighted character, you are still infringing on that copyrighted property without permission.  Even if you use Daz's own Genesis, M4 or other similar models, you could still be held liable for infringement through the use of those models.  Daz grants usage of their base models in 2D images, but their protections only extend to the base figure they supply, not to what you create once you dress and modify it.  If you dress the model up to look like Snow White or Cruella DeVille from Disney's films, these are Disney-owned copyrighted characters.  If you dress them up to look like Superman, same story from Warner Brothers.

The Bottom Line

If you are an artist and want to use any highly recognizable copyrighted characters like Mickey Mouse, Barbie, G.I. Joe, Spiderman or Batman, or even current celebrity likenesses of Madonna, Angelina Jolie or Britney, in any of your art, you could be held accountable for infringement as soon as the work is sold.  It may also be considered infringement if the subject is used in a way that's inappropriate or inconsistent with the character's personality.  The Andy Warhol days of using celebrity likenesses in art are over (unless you explicitly commission a photograph of the subject and obtain permission to create the work).

It doesn’t really matter that you used a 3D character to simulate the likeness or who created that 3D object, what matters is that you produced a likeness of a copyrighted character in a 2D final image.  It’s that likeness that can cause issues.  If you intend to use copyrighted subject matter of others in your art, you should be extra careful with the final work as you could end up in court.

With art, it’s actually safer not to use recognizable copyrighted people, objects or characters in your work.  With art, it’s all about imagination anyway.  So, use your imagination to create your own copyrighted characters.  Don’t rely on the works of others to carry your artwork as profit motives are the whole point of contention with most copyright holders anyway.  However, don’t think you’re safe just because you gave the work away for free.

3D TV: Flat cutouts no more!

Posted in computers, entertainment, movies, video gaming by commorancy on February 18, 2012

So, I've recently gotten interested in 3D technology. Well, not recently exactly; 3D technologies have always fascinated me, even back in the blue-red glasses days. However, since there are new technologies that better take advantage of 3D imagery, I've recently taken an interest again. My interest was additionally sparked by the purchase of a Nintendo 3DS. With the 3DS, you don't need glasses: the technology uses small louvers to block light to each eye, similar to lenticular technologies but without prisms.  Not to get into too many technical details, the technology works reasonably well, but it requires viewing the screen at a very specific angle or the effect breaks down.  For portable gaming, it works ok, but because of the very specific viewing angle, it breaks down further when the action in the game gets heated and you start moving the unit around.  So, I find that I'm constantly shifting the unit to get it back into the proper position, which is, of course, very distracting when you're trying to concentrate on the game itself.

3D Gaming

On the other hand, I've found that with the Nintendo 3DS, the games appear truly 3D.  That is, the objects in the 3D space appear geometrically correct.  Boxes appear square.  Spheres appear round.  Characters appear to have the proper volumes and shapes and move around in the space properly (depth perception wise).  In fact, the marriage of 3D display technology with 3D games works very well. Although, because of the specific viewing angle, the jury is still out on whether it actually enhances the game play enough to justify it.  However, since you can turn the 3D off or adjust the effect to be stronger or weaker, you can do some things to reduce the viewing angle problem.

3D Live Action and Films

On the other hand, I've tried viewing 3D shorts filmed with actual cameras.  For whatever reason, filmed 3D doesn't work at all.  I've come to realize that while 3D gaming calculates the vectors exactly in space, a camera captures two 2D images only slightly apart.  So, you're not really sampling enough points in space, just marrying two flat images taken a specified distance apart.  As a result, this 3D doesn't truly appear to be 3D.  In fact, what I find is that this type of filmed 3D ends up looking like flat parallax planes moving in space.  That is, people and objects end up looking like flat cardboard cutouts placed in space at a specified distance from the camera.  It kind of reminds me of a moving shadowbox.  I don't know why this is, but it makes filmed 3D far less than impressive; it appears fake and unnatural.

At first, I thought this to be a problem with the size of the 3DS screen.  So, I visited Best Buy and viewed a 3D film on both a large Samsung and a Sony monitor.  To my surprise, the filmed action still appeared as flat cutouts in space.  I believe this is the reason why 3D film is failing (and will continue to fail) with the general public.  Flat cutouts that move in parallax through perceived space just don't cut it. We don't perceive 3D in this way.  We perceive 3D in full 3D, not as flat cutouts.  For this reason, filmed 3D triggers an Uncanny Valley response in many people.  Basically, it appears just fake enough that we dismiss it as being slightly off and are, in many cases, repulsed or, in some cases, physically sickened (headaches, nausea, etc).

Filmed 3D translated to 3D vector

To resolve this flat cutout problem, film producers will need to add an extra step to their film process to make 3D films actually appear 3D when using 3D glasses.  Instead of just filming two flat images and combining them, the entire filming and post-processing pipeline needs to be reworked.  The 2D images will need to be mapped onto a 3D surface in a computer.  These 3D environments are then 're-filmed' into left and right eye information from the computer's vector information.  Basically, the film will be turned into 3D models and filmed as a 3D animation within the computer, effectively turning the film into a 3D vector video game cinematic. Once mapped into a computer 3D space, this should immediately resolve the flat cutout problem: the scene is now described by points in space and can be captured properly, much the way a video game works.  The characters and objects will then appear to have volume along with depth in space.  Some care will need to be taken with the conversion from 2D to 3D, as it could look bad if done wrong.  But, done correctly, this will completely enhance the film's 3D experience and reduce the Uncanny Valley problem.  It might even resolve some of the issues causing people to get sick.

In fact, it might even be better to store the film into a format that can be replayed by the computer using live 3D vector information rather than baking the computer’s 3D information down to 2D flat frames to be reassembled later. Using film today is a bit obsolete anyway.  Since we now have powerful computers, we can do much of this in real-time today. So, replaying 3D vector information overlaid with live motion filmed information should be possible.  Again, it has the possibility of looking really bad if done incorrectly.  So, care must be taken to do this properly.

Rethinking Film

Clearly, to create a 3D film properly, as a filmmaker you'll need to film the entire scene with not just 2 cameras, but at least 6-8, either in a full 360 degree arrangement or at least 180 degrees.  You'll need this much information for the computer to translate the scene into a believable model, one that can be rotated around using cameras placed in this 3D space so it can be 're-filmed' properly.  Once the original filmed information is placed onto the extruded 3D surface and the film is animated onto these surfaces, the 3D will come alive and will really appear to occupy space.  When translated to a 3D version of the film, it no longer appears like flat cutouts and now appears to have true 3D volume.

In fact, it would be best to have a computer translate the scene you’re filming into 3D information as you are filming.  This way, you have the vector information from the actual live scene rather than trying to extrapolate this 3D information from 6-8 cameras of information later.  Extrapolation introduces errors that can be substantially reduced by getting the vector information from the scene directly.

Of course, this isn’t without cost because now you need more cameras and a filming computer to get the images to translate the filmed scene into a 3D scene in the computer.  Additionally, this adds the processing work to convert the film into a 3D surface in the computer and then basically recreate the film a second time with the extruded 3D surfaces and cameras within the 3D environment.  But, a properly created end result will speak for itself and end the flat cutout problem.

When thinking about 3D, we really must think truly in 3D, not just as flat images combined to create stereo.  Clearly, the eyes aren’t tricked that easily and more information is necessary to avoid the flat cutout problem.
