OS X Network Prep
created 19 Feb 2001
Preparing your network for Mac OS X
So last time we talked about general preparation work for the full OS X release. This time, we are going to look at one specific item you will need to prepare: your network.
OS X is a much more networkable, network-oriented operating system than previous editions of the MacOS. Indeed, if you look at the lowest level of your system in the OS X Finder, you see two things: your hard drive volumes, and an item called Network. As you add OS X to your environment, you will find your network being used much more heavily, and more constantly, than before. In my own tests, 200MB file transfers proceed smoothly and quickly, even when you are doing three or four at once.
Not only does MacOS X support AppleShareIP, but you also have things like NFS, the Network File System, to think about. NFS, the traditional way of networking Unix partitions, is designed to be transparent: in a traditional Unix, you should be able to use NFS without noticing that you are working off a different machine than the one in front of you. That is not how things have worked in the Apple UI environment, and I have yet to see evidence that NFS will be any different. But the point is that in a Unix-based environment, you can easily be running major applications across the network at the same time you are running heavy apps locally. Depending on your environment, you may have other people logging into your OS X machine to use applications that exist only on your machine.
This is doubled when you consider that while NFS is primarily for drive access, products such as XTools, from Tenon, or one of the XFree86 ports let multiple users, on multiple operating systems, make use of your Mac's resources from all across your network. So if you have a copy of the GIMP, the open source image manipulation program, on your Mac, someone on a different Mac will be able to use it, if you set things up correctly. This is also one of the more exciting features of the BSD base of OS X: the Mac is no longer a closed box to the rest of the world, or at least to the rest of your network. If you need more horsepower for a given task, and you have a bright, shiny G4 sitting on a VP's desk, you can now use it while he's not there. The inherent economies that an integrated network gives to Unix are now going to be available to the Mac.
You also have Samba, which provides services from OS X to Windows users. While Samba won't let you run Windows binaries on your Mac, it does let PC users reach your Mac. Have a group of PC users that need to get to data on an OS X Mac? No problem, Samba is there. Have a bunch of Mac OS X users that need to use data on Windows boxes? DAVE, from Thursby Systems, and Sharity, from Objective Development, are here for you, or soon will be, running natively on OS X.
Because OS X is designed to make such extensive networking so easy, you have to take a serious look at your infrastructure.
First of all, if you are still using hubs, start looking at switches. Even with a gigabit backbone, shared bandwidth will bring your network to its knees. Switches also open up other networking features, such as virtual LANs (VLANs), Quality of Service (QoS) monitoring and allocation, better use of Simple Network Management Protocol (SNMP) features, etc. Hubs are cheaper than switches, but you will quickly pay a much higher price in wasted bandwidth and lost capabilities.
Secondly, start looking at making your minimum internal wired connection speed 100Mbps Fast Ethernet. It's standard on every desktop since the iMac, and on every laptop since the 1999 G3 PowerBooks. It's not noticeably more expensive to implement than 10Mbps Ethernet, and the time you save on backups alone will make up for any increased costs.
Take this as a chance to look at upgrading your wiring if you can. Not just new wires: organize things, trace things, find out why you still need that odd connection that's just there. Look into management packages such as InterMapper, LanSurveyor, netOctopus, Timbuktu Pro, etc. If you are having weird network problems, this is the time to get a product like EtherPeek and track them down.
MacOS X is going to be a networked OS like nothing the Mac community has seen before. (And no, A/UX doesn't count; neither does AIX on the Network Servers.) If you take the time to plan your infrastructure upgrades, not only will your OS X users benefit, but the other OSes on your network will see the benefits as well. In addition, if you take this chance to organize your network, you may find that what you have is a lot better than you thought. Finally, things like video conferencing, internal streaming media, network attached storage, and Storage Area Networks are not going away; if anything, they will only get more demanding. Preparing now is the proverbial ounce of prevention that saves you a pound of cure later.
OS X Prep
created 11 Feb 2001
Preparing for Mac OS X
Now that the GM release of MacOS X is only about six weeks off, network administrators are faced with how to really handle this new OS, and a lot of them are quite concerned. If you are in a pure desktop environment, you have more ways to control the rollout of new environments; those of us with laptop users have to face the fact that some of our users simply will not be able to keep their hands off a sparkly new toy.
So we have to start planning for March 24th now. This means a lot more to administrators with hundreds or even thousands of Macs than it does to one with only thirty or forty, but some things apply equally to all.
First of all, recognize that this OS will require more training than any previous release since perhaps System 7. The UI is different, the operational modality is different, the way it's designed to be used is different. Whether you like what has been done to the interface or not, the fact is, OS X is not the same OS we are all used to, and if you are one of the companies that is using NeXTStep or OpenStep, it's not the same interface you are used to either. So you are going to need to take special care to remind users that simply slapping this OS on their Mac is not going to be like going from OS 8.5 to OS 9.0. Ask them to be patient, so that you will have the time you need to deal with all the new issues that OS X is going to raise, and help ensure that they will have a smoother transition to the new OS.
You are going to need to inventory the applications you and your users depend on, and find out what those vendors are doing as far as Carbon and/or Cocoa versions of those applications. One of the best resources for checking on updates to Mac applications in general, but OS X applications in particular, is VersionTracker's OS X page. If an application is running native in the public beta, then start working with it now; see what's changed and what hasn't. As an administrator, you are going to be a, if not the, focal point for questions on the new OS, so the more knowledge you have, the better off you are. If you can, start accumulating applications that will help you now, so that you are ready to go when March 24th rolls around. If you have a good relationship with a software vendor, see if you can get on any beta teams for the OS X version of their products. It may not seem like a good use of your time at first, but having hands-on experience with a beta release often gives you not only a heads up on any changes in the new version (and there will always be changes; as a certain science fiction doctor once said, "I know engineers, they just love to change things."), but quite often a chance to find the lesser known features of a product that really take it from merely handy to indispensable. This, in turn, helps you help your users get the most out of the new release sooner than they normally would have.
You are also going to need to review your procedures for backup and recovery of user systems. With any new interface, people are going to make the 'newbie' mistakes that they wouldn't normally make. They're going to lose or delete things, or do other minorly catastrophic things that they wouldn't normally do. So if you've been getting by with minimal backups, this is the time to rethink that. Get familiar not only with the traditional MacOS ways of doing things, but also with the Unix-based ways of accomplishing the same tasks. While no one method is absolutely perfect, clever combinations of methodologies can produce excellent results, in ways you may not have anticipated.
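As one small illustration of the Unix-based side, here is a minimal archive-and-verify sketch using tar, one of the tools the BSD layer now puts at your disposal. The paths are invented for the example; a real backup would point at actual user directories.

```shell
# Minimal sketch: archive a directory with tar, then list the archive
# to verify the expected file made it in. Paths are illustrative only.
mkdir -p /tmp/demo_home/Documents
echo "draft" > /tmp/demo_home/Documents/report.txt

# Create a compressed archive of the directory...
tar -czf /tmp/demo_backup.tar.gz -C /tmp demo_home

# ...and confirm the file is really inside before trusting the backup.
tar -tzf /tmp/demo_backup.tar.gz | grep -q 'demo_home/Documents/report.txt' \
  && echo "backup verified"
```

A step like this can be scheduled with cron, which is exactly the sort of combination of methodologies the paragraph above has in mind.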
When I said that OS X will require more training, I wasn't only speaking of user issues. Administrators as well are going to have to get at least comfortable, if not adept, at using the functionality provided by the BSD plumbing in OS X. While users should never have to use a command line, or the Unix layer directly, as an administrator you are missing out on a lot of features and time savers if you do not start learning how to use these things. Remote administration in OS X is far more capable out of the box than in the current MacOS. The first time you are able to simply SSH into a box and kill a runaway process, so that the user doesn't have to reboot the Mac, you'll know what I mean. (If you don't know what SSH stands for, or how it is used, that's an excellent starting point.) Things like 'top', 'ps -f | more', and 'kill -9' are going to be part of your vocabulary. You can resist them as alien concepts that Mac administrators have no need to know, or embrace them as valuable new additions to your toolbox. Personally, I think the latter is the better way to go. You will also find new ways to use existing tools. Those of you familiar with AppleScript are going to find that you have a new world to work with, as you find ways to link the power of AppleScript and shell scripts to get things done. Is it going to be as easy as falling off a log? Nope, but it'll be worth the work in the end. Change is happening, and we need to start planning for it, not avoiding it, or hoping that it will fix itself for us.
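To make the kill-a-runaway-process scenario concrete, here is a minimal local sketch; in real use you would run the same sequence over an SSH session to the user's machine, finding the PID with 'top' or 'ps aux'. The long 'sleep' below stands in for the runaway process.

```shell
# Simulate a runaway process with a long-running 'sleep', then kill it --
# the same sequence you would run after SSHing into a user's Mac.
sleep 1000 &
PID=$!

# Confirm the process is running (in real life you would find the PID
# with 'top' or 'ps aux | grep <appname>').
ps -p "$PID" > /dev/null && echo "process $PID running"

# Forcibly terminate it so the user doesn't have to reboot.
kill -9 "$PID"
wait "$PID" 2>/dev/null || true

ps -p "$PID" > /dev/null || echo "process gone"
```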
In the next couple of columns, I'm going to start presenting things that administrators need to look at for OS X, and the new challenges it will present us with.
created 7 Dec 2000
The Pentium 4
For some time now, the Mac community has been in a state over the lack of clock-speed improvements in the G4. "We know that's not important, and that the G4 is faster than the Pentium III, but it looks bad, and besides, what happens when the Pentium 4 comes out and is running at 1.4GHz or faster?"
Well, Intel finally released the Pentium 4, and so far the results have been disappointing, to say the least. Simply put, unless you are running Quake III, or one of the two pieces of software that have been rewritten for the Pentium 4's SIMD extensions, you will see almost no improvement over a Pentium III. This is great news for AMD, as the Athlon is easily beating the Pentium 4 at similar clock speeds.
But this is even better news for Apple and Motorola. If you read the reviews, the Pentium 4's failure to offer any real reason to upgrade can be laid squarely at the feet of Intel's "clock speed is all" philosophy. Aside from changes that enable faster clock speeds, and some improvements in the SIMD units, there are no real advances in the Pentium 4.
The Pentium 4's SIMD extensions are Intel's take on AltiVec, although looking at the two implementations, Motorola's experience as a DSP maker shows in a much better design. But as with AltiVec, unless you recode for the SIMD units, you aren't going to see an improvement there.
What Intel did do is double the pipeline depth and add some execution caching steps and other clock-speed-related improvements. This is all neat, but it relies on much faster clock speeds, and on Rambus memory, to work. Yet in almost any test, even when the Pentium 4 does beat the Pentium III or the Athlon at the same clock speed, it's by 5 percent or less. This is not the amazing performance boost that the Pentium 4 is supposed to deliver. And now we hear Intel saying, "Well, when we get the Pentium 4 to 2GHz, then you'll see the performance increase." Well, I would hope that doubling the clock speed gives you something.
But what do you pay for when you put clock speed over everything else? Remember, engineering is a balancing act. For every gain, there is a cost. The trick is, having enough gain to outweigh the costs. Has Intel done this? Well, in many real-world ways, no.
First of all, the Pentium 4 has different power requirements, meaning new power supplies. This is annoying more than anything else, and in my world not even a blip, as it's not worth my time to upgrade machines; I end up saving money by buying new machines rather than attempting CPU upgrades. But for a home user, it's another expense to consider. Secondly, the Pentium 4 dissipates a lot of power: 55 watts of it. To compare, the latest version of the G4 from Motorola dissipates only 6 watts max, and even the previous model G4 dissipates only 8 watts max. This is a huge difference. As a result, the heat sinks are now big enough that Intel is specifying motherboard attachments for them, so they don't damage the chips, as has happened with some of the bigger heat sinks on Athlons. That means new motherboards, and new cases for most people. So the upgrade path to a Pentium 4 is essentially a new computer.
But here's another issue with the heat output of the x86 architecture: noise. Most Pentium IIIs and Athlons generate, due to the number of fans needed to keep those beasts cool, something like 55 to 60 decibels (dB) of noise. This is similar to heavy traffic noise levels. Now multiply that by the number of computers in an area with cubicles. This is not minor; some of these boxes sound like DC-3s taking off. In contrast, the Cube, the iMac, and even a G4 Tower are much quieter, almost silent. Even the fan in the G4 Tower's power supply is very quiet compared to the three or so fans in a Pentium III box. A constant ~60dB noise level for eight hours a day, every day, is not a good way to work, and with the Pentium 4 it's not going to get any better. (This would make an interesting ad campaign: "The Cube, because it won't make you deaf.")
The upshot of this is that, for the first time, a rigid focus on clock speed is burning Intel. Will the chip flop? Hardly, Intel is too pervasive for that. But it is finally making people ask, "Why do I need this?" They are tired of spending money every six months for low-level speed gains that just don't show up in daily use. I type all my articles in BBEdit on a 400MHz G3 PowerBook. I've done a couple on a G4 at work. Other than BBEdit opening slightly faster, I'm not working any faster at 500MHz than I am at 400MHz. I am simply never going to type fast enough to bog down a modern processor.
Does Motorola need to get the G4 running faster? Certainly, Mac users as a group hammer their machines harder than typical Wintel users. But I for one am more impressed with faster done correctly, and as part of an overall improvement, rather than a goal unto itself. There is much more to a computer than clock speed, and this truth has finally caught up with Intel. Hopefully, some of the naysayers are watching.
Netscape 6 Final
created 19 Nov. 2000
Netscape 6, Is It Really Worth The Effort?
Okay, so here it is, after a few years of waiting: the successor to Netscape 4. Now that it is finally out of beta, I've been able to work with it for a while, and my impressions, unfortunately, are consistent with my earlier impressions of the various PR (Preview Release) versions.
The installation went smoothly enough, although it did take four or five attempts to get the initial download. I'm not going to get upset about this; I'm willing to bet that every router that's ever heard of netscape.com is being hammered pretty heavily right now. The install creates about 480 items in the Netscape Folder, and it takes up about 29MB of drive space. This is less space than Communicator 4.7.X, although about twice as many items, and more space and files than Outlook Express 5 and Internet Explorer 5. One instantly annoying part of Netscape 6 is that, thanks to a new plugin architecture that is evidently incompatible with 4.7.X's plugins, Netscape ships with only Java and Shockwave plugins; no QuickTime or other plugins appear to be available yet.
Initial impressions are the same as for the PR versions. The HTML rendering is faster than 4.7.X, but not noticeably faster than IE5, with the exception of long text pages, which have always been a sore spot with IE. The appearance of Netscape 6 can be changed via Netscape's themes mechanism, and the final release ships with two themes: the new look associated with Netscape 6, and the older, 4.7.X-ish look. I personally don't have a problem with the new appearance, and find it cleaner in some ways than the 4.7.X appearance.
One problem I do have is that the interface itself is slow and buggy. Popup text hints for buttons take longer to appear than in IE, and the number of artifacts left behind by the popup text, the dropdown lists, etc. is much higher than is acceptable in a non-beta product. Netscape's insistence on a common cross-platform interface is part of the problem here, but a bigger problem is that, in its quest to be platform neutral, Netscape apparently decided it would be better to completely duplicate the window and scrolling routines that already exist in every OS. The MacOS Appearance Manager settings are ignored, and a severe speed penalty when scrolling long pages seems to be one of the results. Other interface bugs include the inability, at least on my PowerBook's external monitor, to widen web and email windows past the width of the PowerBook's LCD screen. When switching between windows within Netscape, the refresh can take up to three seconds, which is annoyingly slow. Scrolling is jumpy, much more so than in IE or 4.7.X. Another entry on the interface/OS incompatibility list is Netscape's utter disregard for the Internet control panel/Internet Config. Considering that 4.7.5 finally started to use the MacOS internet preference settings, and that Netscape 6 ships without any pre-defined helper applications, this distancing of the product from the OS is all the more annoying.
But the worst offenses are in the area of keyboard equivalents. To mark a message as read, instead of the 4.7.X keyboard command of CMD-/, the command is now the letter 'M'. Not CMD-M (which is still 'New Message'), or CTRL-M, or even OPT-M. Just 'M'. To mark all messages in a folder as read, you use the letter 'A'. I'm amazed at this, as it is not hard to see how you could accidentally hit the letter 'A' in a window and mark all messages as read without wanting to. Considering that other email clients use letters to quickly jump to other folders, the use of unmodified alphabet characters for program functions is mind-boggling in its potential for causing all sorts of problems. This insistence on 'doing it all ourselves', on platform neutrality at all costs, has caused far more problems than it will ever fix. It has made the product much more complicated than it has any reason to be, and caused an even more serious issue in the area of RAM usage.
Netscape 6 doesn't just use RAM, it inhales it. I normally double the manufacturer's recommended RAM allocation for any application, so for Netscape 6 I set the minimum RAM to 20MB and the preferred size to 40MB. Just sitting there with only my home page open, Netscape was using 29.5MB of RAM, which tells me that the default preferred RAM settings are too low to be functional. By the end of three hours of use, including IMAP email and my standard web browsing, Netscape 6 was using 47.3MB of a newly self-allocated 60.7MB of RAM. So at this point even my doubled RAM allocation was too low, and Netscape had vampired an additional 20MB of RAM from the system. Even with only one browser window open, simply changing web sites, scrolling, and refreshing pages drove the RAM usage up another 1.7MB, and it stayed there for over an hour with the program doing nothing: no surfing, no emailing, nothing. This points to a pretty serious memory leak, as with nothing going on, Netscape should have returned RAM to the system. As a comparison, for my normal usage I have both Internet Explorer 5 and Microsoft Entourage set to use 16MB of RAM each, and those are the limits they stay within. Netscape starts out with 8MB more than both of those applications combined, and needs another 20+MB of RAM on top of that to do the same amount of work. This is simply not acceptable behavior for an application in its fifth or sixth iteration, and especially not for an iteration that has been in development for over a year.
When it comes to email, Netscape 6 can't even keep up with its predecessors, much less its competitors. Netscape 4.7.X allowed me to download over 3000 IMAP email headers in under 5 minutes over a 33.6Kbps modem. Netscape 6, on a cable modem that was giving me T-1 level throughput, took 50 minutes to download 1500 IMAP email headers. In both cases the email server was the same Netscape/iPlanet IMAP server running on Solaris on a dual UltraSPARC machine. Netscape 6's email filtering capabilities are essentially unchanged from 4.7.X's, and compared to the filtering in products such as Outlook Express, Eudora, Entourage, or even Emailer, quite pitiful. The lack of an LDAP address book limits Netscape even further, by removing a valuable and useful way of condensing company- and university-wide address books into one easily administered and maintained listing. To be blunt, the lack of LDAP is forcing not only my company, but that of almost every administrator I've talked to, to start the process of either removing Netscape from the list of supported software or freezing it at the 4.X.X level.
This is the first version of Netscape I find myself unable to recommend in any capacity, unless you are a web designer and need to preview pages with Netscape's Gecko HTML engine. However, judging by the opinions of most web designers I know, they are probably just going to redirect Netscape 6 users to pages that are compatible with Netscape 3. This is due to Netscape's overly rigid requirement that HTML conform to current W3C standards. Don't misunderstand me here: compliance with those standards should be the goal of every web designer. But there is a lot of old code out there that will never be revamped, for various fiscal and time reasons. Unfortunately, if your code used Netscape 4.X-specific tags such as the layer tag, Netscape 6 won't work with those pages. Being compliant with the current CSS, DOM, XML, and other standards is good, but being incompatible with your own older versions borders on the ridiculous. Considering the public lambasting IE5 took for not being accepting enough of older HTML, for Netscape to be even more restrictive with its HTML engine is incomprehensible, especially given their current market share.
I'm disappointed that AOL chose to release as final a product that is so obviously not a final release. It may have been a large helping of crow to release a PR4 version of Netscape 6, but it would surely have been less crow than AOL is going to eat with the first service-pack fix for Netscape 6. I'm also saddened by the way AOL is ignoring, and even thumbing its nose at, the non-home-user market with this release. While the business and higher education markets may not have been AOL's traditional markets, they were Netscape's, especially higher education. But it looks like AOL has ceded these to Microsoft, Opera, and iCab. I don't see how they can afford to do this, unless they really mean for this to be the last commercial browser released under the Netscape name, and for Mozilla to take over all of this function. If so, I will mourn the passing of one of the companies that helped make the internet a tool for all of us.
Integrating OS X
created 15 Nov. 2000
Integrating OS X into existing network systems
In the previous articles in this series, we've looked at connecting the PB to specific types of servers: AppleShareIP, Windows, Unix. But there's another aspect of network connectivity, and that is integration. In other words, beyond connecting to specific machines, how well does OS X fit in with other network management schemes? The answer is: it depends.
If you are talking about a NetInfo network, which is OS X's native networking management scheme, the answer is, almost perfectly. This is what we expect of course. OS X ships with NetInfo built into the OS. Indeed, most of the essential parts of the OS are managed via NetInfo. If you are running a NetInfo network, then OS X will fit in perfectly, with very few integration issues.
The problem is, not many networks are based on NetInfo. This is not a technical failing on NetInfo's part. As I have been digging up information on NetInfo, and wrapping my head around it, I have been very impressed by much of what it can do. As a directory service, it is easily as capable as LDAP or Novell's NDS, and head and shoulders above Microsoft's Active Directory. It uses a hierarchical domain model, a la LDAP and NDS, and one NetInfo domain can contain as many computers, printers, and users as your server configuration will allow. But again, not many places use NetInfo, so we have to look at how OS X fits into other vendors' systems.
NIS and NIS+ make up the network management system used primarily by Sun Microsystems. NIS, or Network Information Service, allows administrators to manage resources such as computers, printers, users, user rights, and storage access from an NIS server. NIS runs on most Unix systems and is widely used in the computing world. NIS+ is an enhancement that added encryption capabilities and other security features to NIS, and is usually seen only on Solaris networks, although it retains backwards compatibility with NIS.
NetInfo in OS X can be configured to use NIS services, and ships with the basic components to set this up. I'm not going to go into the details here, as they can be quite extensive; an excellent 'howto' page can be found at http://www.bresink.de/osx/nis.html. This site covers not only the OS X public beta, but also MacOS X Server from 1.0 to 1.2, MacOS X DP4, and even Rhapsody DR2 for Intel, and there are some references to the Darwin OS as well. The NIS services in OS X currently allow for user/group management via NIS, and the site explains how to set up the PB to automount any NIS home directories onto the PB.
That said, there is almost nothing intuitive about setting up NIS in OS X unless you have a solid background in NetInfo. Getting the necessary settings into NetInfo is a somewhat arcane and tedious process, and a small but necessary amount of config file editing is required. As well, once you have set OS X up for NIS, if you boot it somewhere it cannot reach the NIS domain, it will sit at the NIS part of the boot process, endlessly looking for the NIS domain. (It may time out eventually; I've only waited a half-hour or so before rebooting.) This means that PowerBook owners such as myself get very good at Emacs and hostconfig files. Obviously, Apple needs to replace this process with a more intuitive way of connecting to NIS domains.
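For the curious, the hostconfig editing involved looks something like the following sketch. This is illustrative only: the NIS domain name below is invented, and you should check your own /etc/hostconfig before changing anything.

```
# /etc/hostconfig (excerpt)
# Skip the NIS lookup at boot, e.g. on a PowerBook away from the office:
NISDOMAIN=-NO-
# When back on the NIS network, set the real domain name instead:
# NISDOMAIN=eng.example.com
```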
But once you get NIS working, it works fairly well, and doesn't seem to need a lot of care and feeding, which is the general idea.
The next network management system OS X supports is LDAP, the Lightweight Directory Access Protocol. The support here is less extensive than for NIS, seemingly limited to user login authentication. Part of this may be due to LDAP's relative immaturity as a network management directory, so I expect this will improve. Most of the available information on using LDAP with NetInfo is in this TIL from Apple. Even though LDAP is a newcomer to the network management arena, many other directory and management services are based on, or compatible with, LDAP to varying degrees, such as NIS, Novell, and Microsoft.
LDAP has the advantage of being a public RFC, and as such is 'owned' by no one company, and you can find LDAP servers that run on almost every OS available, including one that runs on the current MacOS, ClickMail Central Directory, from Gracion Software. If you are unfamiliar with LDAP, and would like to know more, Gracion's site is a good place to start, as it explains many of the basics of LDAP in a concise, understandable manner.
Third on our list of network management systems is Novell Directory Services, or NDS. This support is not yet shipping, but it was announced on November 7th. The actual product name is Native File Services for Macintosh; it will be a downloadable add-on for NDS 5.X and a native part of NDS 6.0. It promises native support for MacOS clients on the server side, with no client software needed on the Macs. It will integrate Novell Modular Authentication Services with Apple's own authentication systems, and provide not only access to network storage, but user management and directory access as well. The product should ship in the first quarter of 2001, so you may be able to get a look at it at MacWorld Expo in San Francisco.
Again, this is only an announcement, not a shipping product, and as Mac administrators well know, especially given Novell's history of Mac support, much can change in six or so months. But it's a good announcement, and it would give Novell an essentially uncontested foothold in the MacOS market. What that will translate to remains to be seen, but it's evidently quite tempting to Novell at least.
Our final entry is Active Directory from Microsoft. This is a quintessential Microsoft management product in that it really doesn't support much outside of Windows PCs beyond very basic file and print services. Luckily, it seems to support LDAP reasonably well, so you may be able to get away with having your OS X boxes treat the AD servers like LDAP servers. I have yet to really try this, but if anyone does, then please, let me know how it works.
I've probably left out a few other systems, but we've covered the 'big four' as far as MacOS X is concerned. It is good that systems other than NetInfo are supported natively, although the implementation procedures need a lot of work. The Novell announcement is good news for administrators using that system, or considering it, and if Microsoft's LDAP support is close to as complete as they indicate, then there is a way to at least partially integrate OS X into AD networks, although the reality of this remains to be seen. OS X is still a beta, so there is time for Apple to create proper interfaces for integrating with NIS and LDAP, and I would really like to see NetInfo made a lot more intuitive to use. I'd also like to see Apple release a LOT more documentation on NetInfo than it has. But the basics are there, so that's a good start.
OS X To Unix
created 30 Oct. 2000
Connecting OSX to Unix
Well, last time we talked about connecting the OS X PB to Windows machines, and the products available to test for that purpose. This time we are going to focus on connecting to Unix networks.
There is an interesting dichotomy here, as these connections are at once some of the easiest and some of the most convoluted of the group.
First off, the standard Unix connection tools are all here: telnet, rsh, rlogin, ftp, NFS, lp, etc. Tools like telnet, rsh, and rlogin are currently command-line-only applications, although the Carbon version of MacTelnet is due quite soon now, and even in its current state shows great promise, as it is scriptable, something the command-line remote access tools are not.
This is of great benefit to those of us with time, effort, and money invested in AppleScript, as porting those scripts to various shell scripts would be non-trivial in both time and effort. For those of us who are used to things like shell scripts and Perl, the PB includes those capabilities as well, so that regardless of your scripting preferences, you have all the basic tools you need, although I imagine that AppleScript will give you better access to the GUI elements and the applications that aren't command-line applications. Apple is being good about giving us low-level access to AppleScript, so that regardless of your language choices, you should be able to integrate that language with AppleScript in some fashion.
FTP is there as well; again, no big shock, as this is a Unix-based OS. For those of you who prefer a GUI-based FTP client (like me), there are products such as Transmit, from Panic. You can of course use web browsers for downloads, but I prefer the functionality of an FTP client. Although there are more Mac FTP clients available, as of yet only Transmit has been Carbonized. As with things like MacTelnet, the primary advantages of a product like Transmit are that you don't need to start dealing with command lines, and you get AppleScript support.
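For contrast, here is what the command-line alternative looks like: a hypothetical non-interactive session with the stock ftp client. The server, login, and file name are made-up examples.

```shell
# Hypothetical scripted download with the stock command-line ftp client.
# Server, login address, and file name are examples only.
ftp -n <<'EOF'
open ftp.example.com
user anonymous me@example.com
binary
get README
bye
EOF
```

A GUI client like Transmit spares you this, and a scriptable client lets you drive the same steps from AppleScript instead.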
The next connectivity issue is NFS. At the moment, this is primarily one-way; that is, the PB can easily mount other Unix NFS shares, but sharing PB drives is a bit trickier. This is primarily due to unresolved issues with NFS and HFS+. If you use UFS for the PB, you avoid the NFS issues, but you lose the ability to run the Classic environment, so it's a trade-off. As well, although I have found a reliable method for accessing other NFS drives, I haven't found a reliable method for sharing NFS mounts from the PB, so I'm waiting for either Apple or a third party to handle this.
In any event, the NFS procedure I use is relatively simple. You have to start NetInfo Manager and unlock it. To unlock it, you must authenticate as root, with root's password. Considering that unlocking NetInfo unlocks the 'keys to the kingdom', this is a sensible precaution. Once in NetInfo, you will want to select the /mounts directory. This should be empty. For each mount, you will want to create a record with the following properties and values:
- property: value
- vfstype: nfs
- dir: (local directory where the remote share attaches, e.g. /Network/public)
- name: (remote server and that server's share, as computer:/sharepath, e.g. server:/home/myhomedirectory)
- opts: bg
A sample NetInfo screenshot is shown below (names changed to protect the innocent).
One thing to make sure of is that you create the dir directories manually; I personally create the destination/local directory before I create the mount entry. Once this is done, you should be able to access the remote files from within the Finder window. I found that I sometimes had to reboot to see the mount, but this wasn't consistent; I presume this will not be required in the release. Another note is that the remote volume does not show up on the desktop in the same manner as AppleShareIP volumes. I would sincerely hope that Apple allows for this behavior to be selectable. That way, both Mac and Unix people can have the behavior they expect from remote volumes. Finally, because you statically create the destination directory, even if you are not on the network, the directory is still on your Mac, albeit empty. For administrators with Unix experience, this is the norm; for Mac administrators without this experience, it will be something new to watch out for. This works very well with Sun Solaris boxes. I have not had a chance yet to test it with Linux or SGI shares. It should work okay, although there do tend to be issues with the way Linux and SGI approach NFS. These aren't insurmountable, but beware of them.
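For the command-line inclined, the same record can be created with NetInfo's niutil tool. This is only a sketch: it assumes the PB's niutil behaves the same as later builds, the server and share names are examples, and note that the mount-type property is commonly spelled vfstype.

```shell
# Sketch: create an NFS mount record in the local NetInfo domain.
# Run as root; slashes inside the record name must be escaped.
mkdir -p /Network/public                                  # local attach point
niutil -create . '/mounts/server:\/home\/myhomedirectory'
niutil -createprop . '/mounts/server:\/home\/myhomedirectory' vfstype nfs
niutil -createprop . '/mounts/server:\/home\/myhomedirectory' dir /Network/public
niutil -createprop . '/mounts/server:\/home\/myhomedirectory' opts bg
```

Either way, the record you end up with in /mounts is the same one NetInfo Manager shows you.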
The final connectivity protocol we'll look at is the X Window System. This is the basis for most Unix GUIs, and is also the primary method in the Unix world for running applications that physically exist on other servers. Before we start with X products for OSX, a brief overview of what X is would be useful. (I would like to thank Sandy Nicholson for this explanation, posted as a comment to an earlier column I did.)
1. An `X server' is a program that implements the X protocol on a given computer, by actually rendering graphics and text for you to see, and by converting your mouse clicks and key presses into appropriate X protocol messages.
2. An `X client' is any program that communicates with an X server (using the X protocol), in order to interact with the user of the machine on which the X server is running. To the end user, it appears as though each X client is running on their machine, though in fact some or all of them could be running remotely.
3. An `X window manager' is a specialized X client, often (but not necessarily) running on the same machine as the X server. It provides basic window management functionality (things like raising and lowering windows, moving them around and iconizing them). To use X, you don't actually need to use a window manager, but it's pretty awkward if you don't!
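To make the client/server split concrete, here is a hypothetical session running a remote copy of the GIMP on your local display. The host names are examples, and this assumes the simple (and insecure) xhost access control that was common at the time.

```shell
# On the Mac running an X server (XTools or an XFree86 port):
xhost +bigserver.example.com        # let the remote host draw on this display

# On the remote Unix box, reached via telnet or rsh:
export DISPLAY=mymac.example.com:0.0   # point X clients at the Mac's display
gimp &                                 # runs remotely, displays on the Mac
```

The GIMP binary never leaves the server; only the drawing commands and your keystrokes cross the network.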
Okay, so the X Window app I have been playing with is Tenon Intersystems' XTools. This is a commercial X Window product that allows you to run X applications, such as the GIMP, Netscape, etc., on your OS X Mac, or lets other computers run X applications that reside on an OS X Mac. For example, there is no Carbon or Cocoa version of Netscape 4.7.X. So, without XTools, I am stuck with trying to use the Classic version, or booting into OS 9 to use Netscape, or trying to run the Carbonized version of Mozilla. But with XTools, I can log into one of our Unix servers and run the Solaris version, shown below.
So with XTools, or the free port of XFree86, I can run any X application that exists on my network, from Netscape to MatLab to IDL, and the only slowdown is if the network is running slowly. This gives OS X users access to an exponentially higher range of applications that span the entire spectrum of application types, from vertical-market high-end applications to games. Admittedly, this capability existed for OS 9, with similar products such as Exodus, from PowerLan. But it's very important that this capability is available for OS X, as it is based on Unix. Even better than just being able to run remote X applications is the reverse: other Unix computers, or systems with X Window packages, can run applications that reside on the OSX Mac, if they follow the X Window specifications. So that means that a multiprocessor OS X Mac could easily act as a compute server for a research firm, running such products as IDL and Mathematica. X Window capability simply extends OS X's reach to an incredibly large span, and that can only help not only Apple, but everyone who uses OS X.
So, finishing this installment, OS X has the essentials needed for good Unix connectivity. There are some blots on this, such as the awkwardness of setting up NFS, and I left out the WebDAV capability in Apache (primarily because I haven't had a chance to really work with it personally), but the plumbing is there, so now Apple needs to get the user fixtures in place.
The next, and final part of my series on OSX connectivity will be on integrating the PB into non-NeXT/OSX Server/NetInfo networks.
OS X To Windows
created 23 Oct 2000
Getting the MacOS X Public Beta to speak Windows
So last time we took a look at how the MacOS X Public Beta connects to a MacOS or AppleShare network, and decided that although there are some rough spots, in general, it does a good job.
This time we take a look at how well the Public Beta deals with Windows networks. (Note: I'm leaving off things such as FTP in this article, just as I did in the last one. For our purposes, I'm concentrating on what are considered the 'native' protocols of the target network that MacOS X is trying to connect to.) There are two main products that allow you to connect to Windows-based networks. The first is Samba, an open-source server that implements the Server Message Block (SMB) protocol. Samba is maintained by the Samba Group. The easiest way to get Samba for the Public Beta is to go to http://osx.macnn.com/features/installsamba.phtml, which not only has a link to the binaries for the Public Beta, but also a nice, concise set of instructions for installing Samba on the Public Beta.
Reading the install instructions brings up one of the annoying parts of the Public Beta, which is the reliance on the command line. There are some marvelous opportunities here for shareware/freeware developers to take a lot of these products and wrap them in a proper GUI installer application. On the other hand, the advantages of the Unix plumbing in MacOS X really shine here as well, considering that, for free, you now have a complete Windows server, and even with the command line, the installation is pretty simple. Luckily, thanks to the hard work done by the folks at the Samba group, there is a product called SWAT, a Web interface to Samba that allows you to fully configure Samba from a browser and avoid the command line completely.
Although there are a lot of options in Samba, the online help is very complete and well-implemented. Each option on the web admin page(s) has a help button, and the entries explain your options well. This is not to say a home user could use Samba well, or even at all. Like any cross-platform server, it requires a solid understanding of things like NT domains, domain security, etc. But for an average network administrator, it shouldn't be a problem. (There have been rumors that Apple is considering shipping a version of Samba with the final release of MacOS X, along with an easy-to-use GUI. I personally hope that these rumors turn out to be true, as this would give Apple an OS that can, with a minimal amount of work, fit into essentially any network and be completely compatible, which would be not only a nice bragging point for Apple, especially in the SciTech arena, but also earn them the gratitude of Mac administrators everywhere.)
As a server, Samba works well. You can specify any directory you wish to share to Windows users (mostly because to specify any shares you have to log into the web admin as root, so you can do almost anything you like), (dis)allow guest logins, have Samba use an existing NT domain for authentication, etc. It's a very full-featured server, and as easy to configure as any other server in that class.
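As a rough illustration of what SWAT is writing out for you behind the scenes, a minimal smb.conf for a single share might look like the following. The workgroup, share name, and path are examples, not a recommended configuration, and the exact options SWAT emits may differ.

```ini
[global]
   workgroup = MYGROUP
   server string = OS X Public Beta box
   security = user
   encrypt passwords = yes

[public]
   comment = Shared folder for Windows users
   path = /Users/Shared
   read only = no
   guest ok = no
```

Everything SWAT does ultimately lands in this one file, which is part of why the web interface can cover the whole configuration.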
Unfortunately, all Samba is is a server. There is a client app, but it's command-line-only, and is more of an SMB-ized command-line FTP program. So Samba will allow you to share resources from the Public Beta across the network, but it can't act as a full client.
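For the curious, the client in question is smbclient, and a session with it does look a lot like command-line FTP. The server, share, user, and file names here are made up.

```shell
# Hypothetical smbclient session: connect to a Windows share, then use
# ftp-style commands (ls, get, put) at the smb: \> prompt.
smbclient //ntserver/public -U jsmith
#   smb: \> ls
#   smb: \> get report.doc
#   smb: \> quit
```

Perfectly usable for grabbing the odd file, but nothing like mounting a share in the Finder.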
To do that, you need the other product for the Public Beta, Sharity, from Objective Development. Sharity is a Common Internet File System (CIFS) client; CIFS is the latest version of the SMB protocol. Although Sharity is not free, it is available with either per-server or per-client licensing, and the prices are reasonable ($3695 allows an unlimited number of MacOS X Macs to connect to up to 20 Windows/SMB servers), and Sharity is a well-put-together piece of software. Although the version I played with is a beta version, it works well. The install process is a nice GUI, although the requirement to be logged into the MacOS X Public Beta box as root is annoying; allowing you to authenticate as root from within the installer would be nice.
The configuration is GUI-based as well, and once you have set it up, it can run in the background without needing attention. It provides access to all of your Windows PCs, Macs running DAVE, or Unix boxes running Samba, via a CIFS mount point in the Network section at the root of your MacOS X hard disk. I was never able to get a remote drive to mount correctly, but this looks like an interaction between a beta OS and a beta version of the product, and I expect these problems will disappear with time. In any event, Sharity is easy to use and administer, and, for now, provides the only graphical SMB client for the Public Beta.
So, in conclusion, if you only need to act as a server to other SMB clients, Samba is the way to go. It's free, and easy to set up and administer. If you need to act as a client to SMB servers, then Sharity is the way to go. They both get high marks for providing capabilities to the Public Beta that allow it to easily coexist on a network with Windows PCs, or on a Windows-controlled network, without requiring any work from the Windows end. All in all, they make MacOS X to Windows connectivity a simple and uncomplicated experience, and once Samba gets a proper GUI installer, the experience will be even better.
Connecting OS X
created 23 Oct. 2000
Connecting Mac OS X to the rest of the world
Well, after spending a few more weeks with the OS X Public Beta, I've had a chance to see how it reacts to the rest of the world, namely, connecting to other Unix systems, MacOS systems, and Windows. Allowing for the beta status of the OS, and of many of the products, it's been a mixed bag, result-wise, but in general, more good than bad.
One of the bigger downsides (although this is due to many of these products being quick ports from other Unix products, or from earlier NeXT products, where the command line was not the verboten thing it is in the Mac world) is that you need the command line to install many of these products. While not inherently bad, most Mac users are not going to want to come anywhere near this environment. I am, however, reasonably confident that the vendors of these products realize this, and will create an OS X GUI install program that allows Mac users to avoid the command line, if they choose to do so. Admittedly, not many of these applications are the type of thing a home user is going to install, but nonetheless, when in Rome, use a proper GUI.
So, since the Public Beta is an Apple OS, how does it connect to AppleShareIP servers? Quite nicely. The way you bring up the Network Browser is a bit different (it's the "Connect To Server..." item in the Go menu, or cmd-K from the desktop). This brings up the familiar Network Browser-style window, although it's named "Connect To Server". Currently, only servers that support either the AppleShareIP protocol (more correctly, Apple File Protocol over Internet Protocol, or AFP over IP) or the HTTP protocol will show up, and even if the server supports those protocols, it needs to be able to use the Network Services Location (NSL) protocol to advertise itself to the OS X Network Browser. The other restrictions are that the KeyChain is not usable from the Network Browser, and you cannot yet add remote drives to your "Favorites" list. However, once you connect to a remote AppleShareIP server, log in, and select the drive you wish to mount, it obediently pops up on your desktop, just like in OS 9. From there, you can use it just like you always have.
The only real downside to OS X Public Beta to AppleShare connections is that it can only connect to AppleShareIP servers (this limitation is clearly spelled out in the various readmes for the Public Beta, so if you are running the Public Beta and haven't yet done so, go read them, and the other installation notes). So that means no straight AppleTalk, which keeps the Public Beta from connecting to Windows NT PCs running Services For Macintosh (SFM), as NT's SFM uses straight AppleTalk. Windows 2000 servers are connectable, as that OS uses the AFP/IP protocol for its SFM. Similarly, for a Mac running the Public Beta to connect to other Macs via File Sharing, those systems must be using the File Sharing over TCP/IP capabilities included in OS 9.X, or, for older systems, be running a product such as ShareWay IP from Open Door Networks, or some other product that allows a non-server Mac to use the AFP/IP protocol. In general, I'd give the ability of the Public Beta to connect to existing Mac networks a B+. It's limited by the lack of straight AppleTalk support, KeyChain support, and the inability to keep individual drive mounts in your Favorites.
As far as allowing other Macs to connect to the Public Beta, the results are about the same. Again, it's acting as an AFP/IP server, so most Macs should not have too many troubles connecting to a Public Beta Mac. The Public Beta Mac shows up in the Chooser or the Network Browser of the connecting Mac, just like any other Mac on the network. The only oddity is in what is accessible from the client Mac. If you are not logging in as the System Administrator, then all you can access is the Public folder in your user folder. As an example, if I log into my PowerBook when it's running OSX, and I use my userID, then the only folder I can access is the MacOSX/Users/jcwelch/Public folder. However, if I log in with "System Administrator" as my userID, and either the password for root, or my own password, since I am an admin for that OS X machine, then I can access any and all volumes that are connected and visible to that PowerBook. (Note: This information is spelled out in the Apple Tech Info Library, article # n106010.) While this may seem to be a bug, in an OS designed for multiple users, you do not want a user being able to just traverse the hard drives at will remotely. By locking down normal user access to a specific shared folder, you can help prevent accidental deletions of other users' files. Now obviously, there will need to be more flexibility here, so that folders needed by a group can be accessed regardless of location, but I will keep a 'wait and see' attitude on this. In any case, I'll give Apple a C+ on this, as it works okay, but it is a bit too limited in scope to be as useful as it could be.
Finally, as of yet, there are no allowances for an OS X Public Beta Mac to connect to a server running Macintosh Manager, nor is NetBoot implemented in the Public Beta yet. Again, this is a beta, and since there is no server implementation of OS X (OS X Server being a very different beast from OS X), the fact that these features are missing is not surprising.
So Apple has done a decent job of allowing for Mac to Mac connections within the Public Beta. Although there are some rough spots (AppleTalk, KeyChain), the basics are there, and they work well. Next time, I'll take a look at connecting to the wonderful world of Windows, followed by Unix connectivity, and finally, a look at managing the Public Beta on networks.
Netscape 6 pr3
created 4 Oct. 2000
Netscape 6.0 PR3: Some small improvements, one huge mistake
Well, I just took the time to install Preview Release 3 of Netscape version 6. So, for any of my comments, please bear in mind, it's a beta, and nothing is fixed in stone.
It installed correctly the first time, no crashes. It starts much faster than PR2 did, by about a minute on my 400MHz PowerBook G3 Series '99. The web page rendering is very fast, with the text appearing almost instantly and images following not long after. It did seem to have an odd problem with one table on my intranet, but that could be the HTML on that page too, so I'm not going to get excited over that. It handles pages from Xerox's Docushare document control product very well, better than Netscape 4.75. In some side-by-side comparisons, PR3 is sometimes faster than IE 5, sometimes not. In any case, the speed difference was less than three seconds for me, so I'm not going to worry too much about that. There are some slowdowns, especially when resizing frames. The redraws are slow enough to watch the frame border redraw two or three times in the course of moving it about two inches. Java support is still spotty, with the functionality not as consistent as it needs to be.
The interface is a little cleaner, and feels less cluttered. The colors are easier on the eye, or at least my eye, and it is easier for me to find things in general. The pseudo-integration with Sherlock is nice, although it seems to me it would have been better just to do an Apple Events link to Sherlock, but when you are rigidly cross-platform, you lose both the good and the bad. Although not a new feature, it also gets rid of Netscape 4.7X's insistence on redrawing the page every time you resize the window. It's also nice to see that Netscape has copied some of IE's better features, such as its improved autocomplete and password manager. There is no KeyChain support, so both browsers still lose to iCab here. The window redraw takes a bit longer than I'd like, but for a beta, it's liveable. PR3 also appears to completely ignore my interface settings as far as scroll arrow settings, etc. I really, really dislike it when a product creates its own interface standard. I'm using it on a Mac; obey my Appearance Manager settings. In addition, many of the Netscape standard key combinations are gone, or changed. So in the address book window, cmd-I doesn't give me information on the selected address book, but rather tries to fire up the instant messaging function. Cute, but cmd-I is always 'Get Info'. Don't muck about with things like that.
There is also no Internet Config support, which, considering it finally showed up in version 4.75, means that once again, you have to maintain separate lists of file and protocol handlers. The AppleScript support is unchanged from 4.75, so it's still very lacking, especially when compared to other applications, such as Outlook Express, Eudora, or even Emailer.
The email client is somewhat improved from PR2 as well; I can finally get a good download of my IMAP mailbox headers in PR3. PR2 would just die at about 1000 headers. Considering I have some IMAP folders with over 4000 headers, that was a bad thing. Header download speed is somewhat slower than Netscape 4.75, but still quite fast, taking about 3-5 minutes for over 4200 headers on a 100Mb Ethernet connection. Reading messages is reasonably fast, although the screen redraws are a bit jerky. I'm not happy that Netscape doesn't give me an option to not view HTML, or to turn off external links in email messages. This is, in one form or another, a feature of most other email clients, and is an important one if you are in a secure environment, where unauthorized web connections are prohibited. This ability needs to be added in, preferably before the next PR release. The email message filters, while allowing for many more filtering criteria than 4.75, still only allow for one action to be taken per rule. Compared to products like Outlook Express, Eudora, or Entourage, Netscape's filters are pretty weak.
The address book functionality is, however, broken. There is no capability to search local address books. For someone like me, who has hundreds of entries and uses the address book as a contact manager, being able to find phone numbers, addresses, etc. is essential. A full-featured find capability in the address book is needed, not just email address autocomplete. But that's not the worst part.
There is no direct-connect LDAP capability whatsoever. The only LDAP capability appears to be a one-time LDAP dump to a static LDIF file, and only if you had predefined LDAP servers in 4.75. This is, especially for IMAP or enterprise customers, an astoundingly critical thing to omit. LDAP is the address book standard used by too many companies and schools to count, and it's used by many of the public directories on the Internet. In addition, more and more companies are using LDAP as their main employee database, so if anything, it's more prevalent than it was when PR2 was released.
Almost every other email client on the Mac, or any other platform, supports LDAP, and indeed, Netscape was one of the pioneers of using LDAP; they made one of the best LDAP servers on the market, which Sun has since taken over. Especially for mobile or IMAP users, who may connect to their email from many different computers, LDAP is essential. Considering that every other directory service, including both Novell's and Microsoft's, touts LDAP compatibility if not outright integration, Netscape's removal of LDAP from PR3 is a blunder, and borders on outright stupidity.
The LDIF file dump is unacceptable, as directories change content constantly, so the LDIF dump would have to be done every time you opened Communicator. If you are talking about a directory with 20,000 or so entries, this is going to make starting Communicator an all-morning affair over a modem.
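A quick back-of-envelope check bears this out. Assuming roughly 1 KB per LDIF entry and an optimistic 5 KB/s of effective modem throughput (both numbers are my guesses, not measurements):

```shell
# Rough estimate of a 20,000-entry LDIF dump over a 56k modem.
# Entry size and throughput are assumptions, not measurements.
ENTRIES=20000
BYTES_PER_ENTRY=1024    # ~1 KB per LDIF entry, a guess
THROUGHPUT=5120         # bytes/sec, optimistic for a 56k modem
TOTAL=$((ENTRIES * BYTES_PER_ENTRY))
MINUTES=$((TOTAL / THROUGHPUT / 60))
echo "LDIF dump: $((TOTAL / 1024 / 1024)) MB, roughly $MINUTES minutes per launch"
```

That's over an hour of modem time every time the client starts, which is exactly why a live LDAP query, fetching only the entries you ask for, is the right tool here.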
I am also aware of Netscape's online address book features, but a) I don't want to have to join yet another portal to get a feature that had no business being yanked, and b) I don't feel comfortable with placing all my company's address information on an external server, and I am not in a terribly high-security configuration. For companies that are, the online option isn't viable, but LDAP via SSL is.
Between no search capability and no LDAP capability, unless this is fixed, and quickly, Netscape has just given up the corporate email market, as any IS/IT manager who tries to put this out as the standard email client will, at best, get their head handed to them. On a personal note, if this is not fixed, then I will, not may, but will, be replacing Netscape as an email client, most likely with Eudora on PCs, OE/Entourage/Eudora/PowerMail on Macs, and I don't know what yet on Unix, although suggestions are welcome, and yes, we already use Pine and Elm.
In conclusion, with the exception of the address book mistake, PR3 is a nice improvement over PR2. Basic functionality is in place, and it's time to speed it up. The lack of LDAP capabilities though, is a stake through the heart if AOL/Netscape want anyone other than strictly home users to ever run Communicator 6. LDAP is just too critical to business and education to be able to function without it. I really hope this is fixed, otherwise, I will have no choice but to replace Netscape with some other email client. And if the only thing I use Netscape for is browsing, then why bother when I can use IE, Opera, iCab, or Omniweb?
Week with OS X
created 26 Sept. 2000
Living with OSX on my network, a week in the trenches, part 1.
Well, I've been living with OSX for just over a week now, and it's been a surprisingly mild ride. In fact, quite a pleasurable one. Just as some background info: I'm running the Public Beta on a G3 PowerBook '99, aka Lombard, with 192 MB of RAM and an 18GB hard drive split into two partitions, the second being a 3GB partition for OSX. I had not been able to run earlier developer previews on this setup for any useful length of time. Another important aspect of this is that I cannot (for various reasons) run Classic on my setup, but I knew that before I installed the Public Beta, so no surprises there. The interesting aspect of this is that I have had to keep all of my applications and utilities 'native', so I am really getting a feel for OSX as more than a carrier for Classic applications.
First of all, as any Unix administrator will admit, Unix is not crash-proof. It's very resistant to crashes, but not immune. And I have on occasion been able to grind the Public Beta to an absolute halt, but it's been consistent enough that I can now submit a decent bug report on it. I've also had the Carbonized version of the Netscape M17 beta kill the whole OS. This is not that uncommon; I've watched Netscape kill Solaris servers as well. However, a reboot, and I'm back in business. I would have to say, running beta applications on a beta OS is definitely the way to have an interesting life.
At any rate, I find that there is very little I miss from the Classic world. The networking speed of the Public Beta is fast; in my own informal tests, I'd say faster by a factor of five to ten in some cases. Internet Explorer is faster, snappier, and accesses pages much faster than IE 5 in MacOS 9. I did try OmniWeb, and can see where it is technically a better browser in a lot of ways than IE, but IE works the way I like to work, so I use that. Mozilla seems to work, but I can only keep it running for a matter of minutes, so I can't really say for sure if it's faster or not.
The Dock is turning out, for me at least, to be much more useful than I had thought. Even with ClassicMenu running (the OSX version of the Apple Menu from Sig Software, at www.classicmenu.com), I don't really use it that often. I placed an alias to my applications folder on the Dock, and between that and the browser view in the Finder, it works really well. In addition, I have put around 15 applications on the Dock, so I can get to the things I use regularly with decent speed. I was never a great user of the Control Strip for much outside of Location Manager and setting display resolution, so the lack of that doesn't bother me either. I do wish that I could more easily move the Dock from one monitor to the other, but it's consistent in that it follows the menubar from what I can tell, so at least I know where it's going to be. I have found that the Dock can be very annoying if applications don't deal with it well. Case in point is Internet Explorer, which will extend its Favorites list down into the Dock area, making it hard to get to choices at the bottom. I would also like to be able to ctrl-click the Trash and have that give me a context menu, as at the moment, it simply opens the Trash up for me. But that has the feel of a beta bug, not a permanent issue, so I'm not too worried about it.
One of the very nice things about the Dock has to do with the fact that it is not static. I know we've all seen the demos with "Mission Impossible 2" playing in the Dock, but that's more of a parlor trick. What is useful is the way you can have the CPU usage meter running in the Dock, and still easily see how hard something is beating on your system. Or the fact that the OSX mail client shows you from the Dock icon that you have unread messages in your inbox. Things like these show a lot of potential for the Dock if developers take advantage of it. Allowing live displays in the Dock also gives it a lot of potential to make up for some of what the Control Strip gave us. I do have to admit to not really liking the new clock. It works, and it's pretty, but it's in the way all the time. However, VersionTracker has a listing for a third-party menu bar clock, so again, if Apple doesn't give it to me, someone else will.
This is one of the more interesting areas of OSX: what *isn't* included with it, and I find the choices interesting. My thoughts on it follow this line: a lot of what we take for granted in OS9 actually started life as shareware. Things like SuperClock, WindowShade, and being able to use Location Manager and the Control Strip on non-PowerBook Macs, most of these were bought or licensed by Apple. But in X, they are all gone. Now (and this doesn't count Location Manager, I'm more than a little sure that it is on the way), by doing this, Apple does two things. First of all, it creates a huge opportunity for interface shareware developers, much like existed in the System 7 days. Secondly, it gives Apple a chance to re-evaluate a lot of the OS utilities, and see which ones are important to users, how many users they are important to, and why. This way, Apple can make better decisions on what to keep from the Classic OS. It's easy to assume that Apple has just killed all of this stuff off, but if you look at it, this is the first time in years, maybe ever, that Apple has had a chance to see, in the 'real' world, exactly what their users want from the MacOS. The Public Beta gives them that chance, and it will be much more useful than user surveys ever are.
The Desktop and the Finder are other areas of concern for Mac users, and rightly so, as in the end, these two things, more than anything else, make the Mac what it is. From my experiences, I don't think that Apple has gotten rid of either; they've just altered them a bit. Where the Finder and Desktop were once thought of as one and the same, now they are more separate to the user. From what I can tell, there are not a lot of differences that are all that radical.
For starters, the Desktop is still functional, with the only differences being that internal hard drives don't automatically show up on the desktop, and the Trash is in the Dock. I've got all kinds of folders and applications living on my desktop, happy as can be. I was able to move things like Disk Copy to the desktop, again, just like in Classic. Note: move, not alias or copy. The actual Desktop folder exists in a subfolder of my User folder, but this is no different than when using Multiple Users, or Macintosh Manager, under OS9. Your personal files and preferences are kept separate from the master set. This is good, as that way, it is harder to do real damage to your system. I also think Apple has done a masterful job of simplifying the directory structure, *especially* compared to DP4. It still needs a few tweaks, like maybe moving the root Library directory under the root System folder, which would give you a directory structure very close to an OS9 Mac running Multiple Users. Aliases seem to be a bit more spotty, mostly working the way we are used to, but in some cases breaking if you move the original, most notably in the login items tab of the Login control panel. Again, I think this is more due to "It's a Beta" than "Apple's dumping Aliases". Another interesting change is the absence of the 'Put Away' command (cmd-Y) to dismount network drives, among other things. Instead, you use the 'Eject' command (cmd-E) to accomplish this. I'm not sure if that's good or bad, but for me, it's probably closer to bad. Again, this strikes me as an "It's a Beta" detail.
The Finder hasn't changed that much either. If you use cmd-B, or Hide Toolbar from the View menu, and stay in list or icon view, it still looks very similar to current Finder windows, with some added buttons. I have not found, in a week of almost constant use, any case where I hit the wrong button, closing the window instead of minimizing it. The window controls are spaced well apart, and you have to click on the button to activate it, not just near it. This includes rollovers. You can in fact trigger the rollover effect, yet not have the cursor in the clickable area of the button. So you are either on target, or nothing happens. This is acceptable, as the buttons are big enough to make targeting easy, yet not so big that you can easily hit them by mistake.
I also have to admit to really enjoying the browser view. Especially for someone like me, who does have very deep folder hierarchies, it's nice to be able to scroll back to the parent folder or hard drive without having to close and open windows. Once you get used to it, it's quite a bit faster for navigating folders and finding things. One thing I do *NOT* like is that you can only move windows via the title bar. This can get quite awkward, and makes things harder than they need to be. I'm also not thrilled with the menubar acting as a ceiling, but only on the main monitor. If you are not going to let us slide windows up past the top of the screen or menubar on one monitor, then eliminate that behavior for all monitors. On the other hand, since you can only move windows by the title bar anyway, you aren't moving them too far up. The one really curious window behavior involves what happens if a window gets hidden behind the Dock. There's a nagging inconsistency there. If the application in question is the Finder or Internet Explorer, you have to hide the Dock to get to the window, or just give up, close the window, and re-open it. If it's the email application, then the title bar pops back up above the Dock as soon as you release it. TextEdit leaves it alone until you select the window from the Window menu, at which point it pops it above the Dock. I like the email application's method for new users, and TextEdit's behavior for experienced users. In any case, 'losing' a window behind the Dock is not an acceptable mode, and I hope that however this is dealt with, it's dealt with at a system level, not an application level, at least for the basic behavior.
Those of you who hate the brushed metal look in Sherlock will be happy, as that is now gone. QuickTime still has the metal look, with all its good (easily draggable) and bad (hard to tell it's in the background) features, although the older favorites drawer is replaced by a button that says TV. This is less intuitive than the old version, as I wouldn't expect my favorites to be in a tab in the TV button's window. The stereo controls are gone, and the volume slider is an actual slider now, not that pseudo-wheel it was.
In any case, there is a lot to like about OSX, and a lot more than just Aqua, although that's what I've covered here today. Next up, a look at the more geeky parts of OSX that will make an administrator's heart beat a little faster.
created 30 Aug. 2000
The Mac OS X Public Beta
So, with the announcement that the public beta of OSX will be available on September 13th, Mac network administrators are going to be thrown into the same swamp that Windows and other platform administrators have had to deal with for a while now...how to handle users with a public beta of an OS.
The most conservative reaction is to ban it completely. While safe, I think this would be a mistake. OSX is going to be a part of your life, whether public beta or golden release. The beta is a perfect chance to not only do network level tests, and compatibility testing yourself, but to also get firsthand experience with users issues that will crop up.
I am going to go the route of finding a smallish number of Mac users with the knowledge of, and need for, OSX. The reason for this is simple. No matter how long I test something, my tests are going to be biased by my needs and uses of the system. I don't use my PowerBook the same way one of our VPs uses theirs. I won't do the same things they will. They will find different bugs, or issues, than I would.
So how does this help?
Well, for one, it gives you far more opportunity to see how OSX is going to change things in your environment. It will also let you know what kind of user training will be required to make the transition to the new OS as painless as possible. If you manage to include a power user or two, you will also get the joy, or pain, of seeing how various programs and system modifications work, or do not work. You'll also have the best kind of test data for any sort of infrastructure changes that OSX may require...first hand.
You'll also find out exactly what you have to do to sell the upgrade to the people in charge at your company.
Don't underestimate the effect that a good sales strategy will have here. I was able to sell the upgrade to OS 8.5 and later 9.0 based more on our being able to easily view Chinese web sites correctly without reconfiguring the browser than any other feature. Why? Because we do a lot of business with Hong Kong, and other Asian countries. Finding a feature that makes your business run easier is always a big hit for any software, especially an OS.
But how do you go about keeping people from just installing a beta en masse?
Well, first of all, don't try to hide its existence. You won't be able to anyway, and you'll just look foolish. Instead, send an email to your Mac users, telling them about the beta, and that you'd like to set up a small group to evaluate it for your company. Have a clear list that has feedback requirements, and large, nasty warnings that a beta OS, even a stable one, can do bad things to your programs and data. Make sure they understand that bad means 'gone forever' in the worst case.
That should weed out the more casual users. Once you have gotten enough feedback, then inform them that since this is a beta, again, they need to be willing to commit to regular meetings, and other feedback on bugs or errors they have noticed. Make sure they understand that this will take a few hours a week from other things, as they will be spending a lot more time dealing with the OS than they are used to.
This should weed out a few more folks, and help you get the group to a manageable size. Now you have your small, elite group of people ready to do a dangerous thing. Play that up. These folks are not normally used to doing beta tests of the OS, so make them feel like they are doing a dangerous thing that is also very cool. Set up a custom email address for the group with a cool name (think along the lines of the X-Men if you can't come up with something else). I'm not saying go buy t-shirts and buttons, although if you want to, by all means, go ahead.
What I am saying is make this new OS be something so cool that everyone wants a piece of it. Make it the news of the IT department. As time goes by, and bugs are fixed, add more people to the group, with different levels of expertise and experience.
In the end, you will gain far more than just the headaches of shepherding a beta program. You'll have live, firsthand knowledge of what you need to do as an administrator to get your company ready for OSX. You'll have months of experience dealing with the networking and communications changes that OSX brings. You'll also have a group of OSX 'power users', who will, in their own way, have as much knowledge about the OS as you do. So you'll have an extra layer of user support, without the budget. You'll have months to fix any in-house software that OSX breaks. You'll have avoided the agony of random users installing OSX, and then screaming at you for letting them do that. You'll know what works, and what doesn't, and you'll know this because you'll have seen it firsthand.
And with a little luck, you'll look like the visionary that you already knew you were, and your users will see you, and themselves the same way.
And won't that be handy when it comes time to upgrade your Macs?
created 9 Aug. 2000
Netscape PR2...not even close
Well, I've just finished playing with both the M18 release of Mozilla, the open source pre-release version of Netscape Communicator, and PR2 of Netscape version 6. In the case of PR2, 'attempting to play with it' would be more accurate, as I never actually got it to run. On the other hand, M18 installed fine, started up, and acted like a somewhat functional beta. Unfortunately, I still cannot add new LDAP directory entries for email address lookups, and since my company uses LDAP for our corporate address books, I really can't do any email work with M18.
But as I was sitting there, staring at what has to be the most horridly non-Mac interface of any application I've dealt with in years, I suddenly realized what was making me so mad about Netscape. It wasn't that the beta didn't work right, although in my testing experience, a beta should be feature complete, yet buggy; if the features aren't all present, and basically functional, then it's an alpha, and a public preview release should at least start up without causing Macsbug screens. But that wasn't what was making me angry. Something else was.
The way they've treated their Mac users over the last few years.
As a network administrator, I've spent more time dealing with new versions of Netscape across three platforms over the last year than Windows patches by far. Now, thanks to yet another bug in Communicator, I have to tell my Netscape users on three platforms to disable Java, because Netscape managed to make it into a whopping security hole. So now, my Netscape users have to deal with a large loss of functionality until Netscape releases the patch, and we can get it installed.
And yet, in the Mac community, Netscape has this exalted place as the leader of the fight against the Microsoft Monster.
Why? Why do we support a company simply because they exist?
Oh certainly, they had a good product at one point. But what has Netscape done for the Mac in the last year? Their Java implementation on the Mac is a joke. Everyone else on the Mac is able to use Java that runs with the 1.1.8 JDK, while Netscape is still back at 1.0.X. It's so bad that if you use the document control features with Netscape's web server, you can't browse your hard drive to find the files you want to upload. You have to enter the path. Not only that, but after four releases since version 4.7, and many more since version 4.5, Communicator *still* doesn't support Apple's MRJ Java implementation.
But that's not all.
Netscape ignores Internet Config/Internet Control Panel settings, so even though I spent many hours creating an Internet Preferences file that is perfectly tuned for our users, for Netscape I have to manually tell it things like how to treat Word files, not to use Sparkle for MPEG movies, etc. It still can't handle multiple POP accounts unless you create multiple user configurations, one per POP account. It has essentially no XML support, and feature-wise, it can't even match the Windows version, much less anyone else's browser. The AppleScript support for the Mac version of Communicator 4.7.X is horrid, and I didn't see a point in looking at the PR2 version. Their menus still don't comply with the Platinum interface, which has been the standard for around three years now. They still don't support Navigation Services, which has been out for the last two years or so. If I'm reading email in Netscape, and click on a link, there is no easy way to tell Netscape to always automatically use a different web or FTP application. The list goes on.
Here's a company that had an absolute lock on the Mac browser market, and now the Mac version of their shipping client is the worst one they ship. And when my Mac users complain, my answer has been, until now, "Well, Netscape 6 will fix this."
No more. There's a company that is shipping the best XML browser available, it supports Internet standards better than any browser that isn't in beta, it's smaller than Communicator, needs less memory to run, integrates better with the MacOS, and crashes less. This company also is shipping a better email client that has some of the best AppleScript support of any email client, and they are about to release a full-featured email client/PIM as part of another product that is just fantastic. Who is this company?
Yes, I know they are the evil ones, Bill is the devil, I've heard all the rants. But when it comes to their Mac products, the rants are hollow. I did some checking on my hard drive, and right now, the combined size of Internet Explorer 5, Outlook Express 5, and AOL Instant Messenger (the three products that essentially give you all the features of Communicator) is less than the size of Communicator 4.7.X. I've also realized, from talking to the Microsoft developers, that the Mac Office, OE, and IE development teams are as big a group of hardcore Mac heads as you will find anywhere. They love the Mac platform, and are on a mission to write the best Mac applications in whatever market they are writing for, from email to word processing. And it shows.
Internet Explorer has the best XML support of any commercial browser. It's also fast enough that the differences between it and Netscape are minor, and it uses Apple's MRJ for Java, which, while not current with Sun's, is better than Netscape's by far. It uses Internet Config settings, has allowed me to auto-fill online forms since version 4.5, and is more stable for me and my users than Netscape. It also maintains my browsing history even after I close my current browser window. It tracks my online passwords for me, and allows me to view and edit the ones I have it tracking. It may display HTML marginally slower than Communicator, but its other features more than make up for that time by the end of my day. In short, it's a better Mac application.
Outlook Express has excellent AppleScript support, phenomenally powerful filters, handles multiple POP and IMAP accounts with ease, supports LDAP, and while not as fast as Netscape for certain IMAP functions, has more features that my users and I need, and uses less RAM than Netscape. Outlook Express is simply a better Mac application than Communicator.
Their upcoming Entourage product, which I have been testing, is honestly a joy to use, and it beats Netscape in too many ways to list here. It, like Communicator 6, is a beta product. Unlike Communicator 6, it isn't over a year late, it is feature complete, it conforms to MacOS interface rules (yes, I know about skins, but I find it annoying that I need a third-party product to make a Mac application look like a Mac application. Also, I have to deal with many more people than just myself.), and it has a PIM feature set that is one of the best I have ever used. It supports Internet messaging, news, and calendaring standards. (It's amusing to note that the only way Entourage can connect to a Microsoft Exchange Server is via Internet protocols such as POP, IMAP, or iCal.) It also handles Netscape's vCards better than Netscape does, gives me almost unlimited ways to view my email, and the AppleScript support is, not surprisingly, excellent. This is unlike Netscape, whose AppleScript support for email functions is so bad it would be better off removed completely. Entourage also has Palm support for its calendar and contact list. Netscape has yet to get any Palm support into any Mac version of Communicator, but the Windows version of Communicator has it. Once again, Microsoft has made Entourage a better Mac application than Netscape has made Communicator.
The point to all of this is, Netscape's free ride, at least in my small domain, is over, done, ended. No longer will I make excuses for a currently bad product, hoping that the next version will work better. I certainly don't do it for Microsoft. If they want to be my recommended product, they now have to earn it like everyone else does.
As an example, until recently, Outlook Express couldn't sort IMAP email into folders that weren't local folders. I helped beta test that product, but I couldn't use it without that feature, and was quite up front in telling the OE team why. They fixed that, and I started using it.
If Netscape Communicator 6 turns into an excellent Mac product in all the ways that are required for a Mac product to be excellent, and is better than the other products available, I would have no problem in using it, or recommending to others that they use it. But right now, I can't do that.
Netscape cannot succeed simply because they aren't Microsoft. Frankly, if that's the best reason they have for being, they don't deserve to succeed. I'm not going to use a product because of what it isn't, or who the company that makes it is not. The product either helps me and my users do what we need to do better than its competition, or we won't use it, no matter who makes it.
I have no company loyalty, I have no product loyalty. I use OE and IE, because for the way I work, they are the best products out there. If iCab or Opera release a browser that is better for me than IE, I'll stop using IE. Simple as that. If Netscape 6 turns out to accomplish those goals, then they will have earned their place as my web and email applications of choice.
Until something better comes along.
X for X
created 25 July 2000
Well, as a network administrator at Macworld Expo, there are times when I can feel like the proverbial fish lacking water. But there are always products that catch my eye, and some that even make me feel like someone out there is listening to us. This year, if I had to pick the one announcement that received almost no press, and yet is of critical importance to OSX, it would have to be Tenon Intersystems' announcement of an X Windows package for OSX.
Even though it's not available now, (then again, neither is OSX), this was an announcement of major importance to OSX.
Being based on Unix, there is a natural synchronicity between OSX and other flavors of Unix, such as Solaris, Linux, AIX, and Irix. But until this announcement, OSX was isolated from its Unix brethren due to the (some say shocking) lack of a commercial-quality X Windows package. (Yes, John Carmack from Id has done a good job of creating a Darwin-based X project for the Open Source community, but Darwin is not OSX, only a part of it.)
But why X Windows? Why is this so important?
In simple terms, X Windows is one of the things that make Unix, well, Unix. X allows Unix users to easily share graphical applications and other resources with other X Windows users on the network, regardless of platform.
As an example, at my company, we use iPlanet's email server. If I need to perform some administrative task on that server, I can use an X Windows package on my PowerBook to access the Solaris box running the server, and run the server administration program natively on the server, but with all the display and mouse information handled on my Mac. This means I don't ever need to worry about having a Mac version of that program; as long as it runs under X, I can use it just as well as if I were sitting at that machine.
In other words, X allows the remote access and execution of graphical applications across a network by any computer, regardless of OS or hardware, as long as that remote computer has a compliant X Windows application. For those of you using Citrix's Metaframe product with Windows Terminal Server, they are quite similar. Citrix has done some serious refinements to the X protocol, trimming bandwidth, allowing for non-TCP/IP networks, and making security an inherent part of the system, but at its heart, Citrix is basically X Windows for NT/Windows 2000.
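To make the mechanics concrete, here is a minimal sketch of the idea behind X's remote display: an X client reads the DISPLAY environment variable to decide which machine's screen to draw on. This is only an illustration; the hostname "powerbook" is hypothetical, and Python is used as a stand-in for any program that launches an X client.

```python
import os
import subprocess
import sys

# An X client decides where to draw from the DISPLAY environment
# variable, in the form "host:display.screen". Pointing it at another
# machine sends all drawing and input over the network to that
# machine's X server. "powerbook" is a hypothetical hostname.
env = dict(os.environ)
env["DISPLAY"] = "powerbook:0.0"

# Any program launched with this environment inherits the setting; a
# real X application (a server admin tool, the GIMP) would connect to
# the PowerBook's X server and render there, even though it executes
# here. This child process just reports what it inherited.
result = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['DISPLAY'])"],
    env=env, capture_output=True, text=True,
)
print(result.stdout.strip())  # prints "powerbook:0.0"
```

In practice, the remote X server also has to authorize the connection (via xhost or X authority files) before a client is allowed to draw on it.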
By having an X package available for OSX, Tenon ensures that OSX will not be left on the sidelines of the Unix world, but will be able to play on an equal basis with all the other Unix variants. By further allowing OSX to be an X Windows server as well as a client, Tenon is ensuring that any Unix user will be able to make use of the power of Apple's OS and hardware, not just other OSX users. So this way, if a company had a number of multiprocessor G4 boxes, by installing Tenon's package, and BSD-X applications on those boxes, someone using Linux or even Windows would be able to run those X applications, and leverage the power of Apple's hardware.
So far, the ability of Tenon's product to remotely serve native OSX or Carbon applications (i.e. non-native X Windows applications) is not known, although I would be surprised if the first release did this. This is not to say it would be impossible, but there are some technical issues with getting Aqua to accurately display on platforms that don't have the Quartz system as part of the OS, and translating Aqua to X is not a minor issue. Going the other way, it looks as though Tenon is able to make X applications being run on an OSX box conform to the Aqua behaviors, i.e. window controls, the Dock, etc.
Tenon is also including X development libraries, to make it easier for OSX developers to create their own applications that can be run via X Windows.
A full X implementation for OSX is an important step for this OS. It is a primary, and critical, way to really show the world, especially in those areas where X is of crucial importance, such as higher education and science/technology computing, that Apple is coming out with a true Unix-based OS, and that it will play nicely with other Unix platforms, while losing none of what makes the Mac so special.
OS X Security Concerns
created 25 July 2000
Last column, we took a quick look at some of the advantages that OSX gives the network administrator, particularly in the security area. This time, we are going to deal with the dark side of security, namely the new, fun ways that OSX can potentially hurt you if you aren't aware of security, and security issues.
The first danger has much to do with OSX's increased feature set, and how a Unix box can be manipulated on a network. Although Mac users have been networking heavily since 1985, Unix takes network operations to another level.
For one, in the world of Unix, the only difference between a networked hard drive and a local hard drive is the name. Functionally, you use applications on a mounted network drive the same way as you would use applications on a local drive. In some cases, the directory that you run most of your applications from may not even be local, and in larger Unix installations, your home directory is on the network as well. The local hard drives are used for swap space and the operating system.
So far this is not too terribly different than the way Mac users do things, especially with things like NetBoot, Macintosh Manager, etc. The next part of the network equation is very different, and is where the Unix layer of OSX has a real ability to severely hurt a network.
It's the Remote Procedure Call (RPC).
RPC is a way for Unix, and other operating systems to 'farm out' jobs to other computers on a network, so that large jobs can be handled by multiple machines at once. RPC was started by Sun Microsystems, and is now a standard way for computers to programmatically communicate with each other.
Essentially, there are two parts to an RPC program. The local part is the part you start, and contains the code that does what needs to be done locally, as well as the locations of the remote machines that are going to assist in running the program. The local RPC application passes data to the remote programs, and waits for the returned data. (NOTE: This is a drastic oversimplification of what really goes on with RPC.) The advantage of this is that you can have a relatively small RPC local stub program passing information to much larger remote programs on tens, or even hundreds of machines, and let them work on the data for you. Very efficient, and a nice way to make use of spare CPU cycles.
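As a rough sketch of that local-stub/remote-program split, here is a toy RPC exchange. It uses Python's XML-RPC modules purely as a stand-in for Sun-style RPC, and runs both halves on one machine so the example is self-contained; in a real deployment the server half would live on the remote machines.

```python
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

# The "remote" half: a server registers a function that other machines
# can call. In real Sun RPC this would be a compiled program registered
# with the portmapper; localhost and XML-RPC are used here only for
# illustration.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lambda data: sum(data), "crunch")
host, port = server.server_address
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "local stub" half: it knows where the remote machine is, passes
# the data across the network, and waits for the returned result. The
# actual work happens in the server process.
stub = ServerProxy(f"http://{host}:{port}")
print(stub.crunch([1, 2, 3, 4]))  # prints 10
```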
Especially for crackers.
If a cracker gets root access (root being the 'superuser' or owner of a machine in Unix-speak), then they can set up RPC programs that can do, well, anything they want them to, from sniffing passwords, to copying data, to being used as launch pads for other activities. RPC programs aren't scripts, or applets; they are compiled code, and can have the capabilities that any other compiled application can have.
So how do you prevent this from happening to you?
First of all, don't turn anything on that doesn't need to be on. If you have an OSX Mac that doesn't have a specific need to use RPC, don't turn it on. The same goes for FTP, Web services, File Sharing, etc. If there is no door, the lock can't get picked. Yes, this will mean you aren't exercising the absolute full capabilities of your network. This is due to one simple principle:
Security and inconvenience are directly proportional to each other.
The most secure computer in the world is one that's turned off. But it's also rather useless. Now, in a lot of cases, you will need to have certain services, such as RPC, or FTP available for use, so the next level is how to make sure that the only people using those services are the ones who are supposed to.
The answer: Password policy
First of all, the root password should *only* be given to those with an absolutely clear need for it. 'Need to know' should be the only reason here, not position, not friendship, not anything else. Secondly, all passwords, not just root's, should be changed regularly. I recommend a minimum of every thirty days, but you may want to change critical ones, like root, more often. In some heavily classified facilities, it is not unheard of for certain passwords to change daily. Other times to change the root password are when anyone who had root leaves the company or transfers to a position where root is no longer needed, or when a remote site is added to or dropped from the network.
Third, enforce good passwords. This means a minimum of eight characters, with at least one number and one non-alphanumeric character, no names, no common words. Something like 'Hyrg8*_Z' is a decent enough one. Make sure that when passwords are changed, they can't be 'changed' to the exact same thing they currently are, and in some cases, keeping track of the last year's worth of changes is not a bad idea.
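Those rules are mechanical enough to enforce in code. Here is a sketch of such a check; a dictionary lookup for names and common words would be needed as well and is omitted, and `previous` stands in for a history of past passwords.

```python
import string

def acceptable(password, previous=()):
    # The policy above: eight or more characters, at least one digit,
    # at least one non-alphanumeric character, and no reuse of a
    # recent password. (A check against names and common words is
    # omitted here.)
    if len(password) < 8:
        return False
    if not any(c.isdigit() for c in password):
        return False
    if not any(c in string.punctuation for c in password):
        return False
    if password in previous:
        return False
    return True

print(acceptable("Hyrg8*_Z"))                          # True: meets the policy
print(acceptable("password"))                          # False: no digit or symbol
print(acceptable("Hyrg8*_Z", previous=["Hyrg8*_Z"]))   # False: reused
```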
Finally, get the message across that writing down, or giving out, passwords is A BAD THING. If you read about crackers like Kevin Mitnick, you realize that he did a lot of his best cracking by calling up an employee of a company and saying "This is the IS department, we're running a check of the system, and need to verify your password. Could you tell me what that is so that I can make sure it's correct?" No long, sweaty hours over a CRT, just five minutes and a believable story, and he had access to a network. Janitorial staffs have also been a great way to gain access. A cracker will hire on with a company that cleans for a company they want to crack, and with a little astute observation of sticky notes on monitors, boom! Instant access.
There's still more to talk about on this subject, such as command-line issues, shell scripts, etc. But this is a good start. I highly recommend that any Mac administrator familiarize themselves with Unix security issues now, before you are doing it at 3am while dealing with a break-in. A lot of security measures will not win you points with your users, and they are very inconvenient. But it beats having a hundred G4s become the local cracker's supercomputer cluster.
OS X Networking
created 15 June 2000
Networking in the Public Beta
With the public beta of OSX fast approaching, there are more than a few network administrators asking, "What will OSX do for me?" Well, the answer is, quite a lot.
At the OS level, OSX fixes a number of annoyances that have been a part of the MacOS for a long time. The first one is the limitation on the number of active network interfaces. As it currently stands, you can only have one interface per protocol active. This means that if you have two ethernet cards, TCP/IP can only use one of them, the same for AppleTalk. Now, you could have AppleTalk on one card, and TCP/IP on the other, but still, you couldn't have both cards running TCP/IP and AppleTalk. The only way around this is to use a third party product, such as IPNetRouter, or SoftRouter, or to use AppleShareIP, (which only lets you get around this with AppleTalk, not TCP/IP.) This ability to use multiple network interfaces simultaneously is called multilink multihoming, and as I said, the lack of this ability has been a severe limitation of the MacOS for as long as it has had networking.
(Technically, this is not a limitation of the MacOS networking subsystem, aka Open Transport. Open Transport does allow you to have multiple active interfaces, otherwise things like SoftRouter and IPNetRouter wouldn't be able to work. More accurately, it is the AppleTalk and TCP/IP control panels that don't allow you to do this.)
OSX fixes this. With OSX, the user interface for the networking subsystem will allow you to select multiple network interfaces and give them their own TCP/IP addresses, subnet masks, etc., and they will all work. Better still, you will be able to set up OSX to forward IP packets between interfaces, allowing it to act as a very simple router. So, for administrators wanting to set up the server version of OSX, you'll be able to set up, for example, a Gigabit Ethernet or ATM card, and set it to communicate only on a server subnet, where you would want clean, high-speed connections. You could then set up another Gigabit card, or a 100Mb Ethernet card, so that Classic MacOS clients, as well as OSX clients, could talk to the server OSX machine. This has been common practice with Unix, Windows, and other servers for years, and now the MacOS gets this as well. So, with OSX, you can have as many network interfaces as you have slots to stuff them in.
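As a small illustration of what multihoming looks like to a program, this sketch lists the machine's network interfaces and binds a listening socket to one specific address. On a real multihomed server you would bind one socket per interface address; the loopback address is used here only so the example runs anywhere.

```python
import socket

# A multihomed host shows several interfaces here (en0, en1, and so
# on under OSX; names vary by platform).
for index, name in socket.if_nameindex():
    print(index, name)

# A server on a multihomed machine can bind one listening socket per
# interface address, serving each subnet separately. Binding to a
# specific address restricts the socket to that interface. On the
# server described above, this would be the Gigabit card's address;
# loopback is used here only so the sketch runs anywhere.
server_subnet = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_subnet.bind(("127.0.0.1", 0))
server_subnet.listen(1)
print(server_subnet.getsockname()[0])  # prints "127.0.0.1"
server_subnet.close()
```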
Another benefit that OSX brings is in security. The BSD/Darwin layer comes with the standard Unix security capabilities. For the network administrator, this is a huge benefit, as the BSD underpinnings give the admin better security capabilities than the current MacOS.
As secure as the current MacOS is, it's an accidental kind of security. The lack of a command line makes it, by default, a very secure platform. But accidental security is not the same as deliberate security, and here is where OSX is far more capable than the current MacOS.
OSX, due to the BSD layer, has far more granular security capabilities than the Classic MacOS. You can apply separate permissions for the owner of a file or folder, the group that owner belongs to, and the general public. You can set not just read/write privileges, but execute privileges as well, so even if someone can see a file or application, they can't run it. You can apply different permissions for the directory that file is in, so, in a highly secure facility, the person who creates the file wouldn't be able to access it once they were done with it. So mobile workers can take laptops home, and not have to hide things from small children, as the child won't be able to touch it without the worker being logged in. While third party add-ons to the current MacOS give you this ability as well, with OSX, it's a part of the OS.
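Those three permission sets are literally bits in each file's mode. Here is a small Python sketch of how the owner/group/other read, write, and execute bits decode into the familiar nine-character string; the modes shown are just examples, not recommendations.

```python
import stat

def describe(mode):
    """Render a numeric Unix mode as the familiar rwx string."""
    bits = [
        (stat.S_IRUSR, "r"), (stat.S_IWUSR, "w"), (stat.S_IXUSR, "x"),
        (stat.S_IRGRP, "r"), (stat.S_IWGRP, "w"), (stat.S_IXGRP, "x"),
        (stat.S_IROTH, "r"), (stat.S_IWOTH, "w"), (stat.S_IXOTH, "x"),
    ]
    return "".join(ch if mode & bit else "-" for bit, ch in bits)

# Owner may read/write/execute, group may read/execute, others nothing:
print(describe(0o750))  # rwxr-x---
# World-readable document that only the owner can change:
print(describe(0o644))  # rw-r--r--
```

The second example is exactly the laptop scenario: the file is visible, but without the execute bit no one can run it, and without the write bit no one can change it.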
A further security advantage is that the primary user doesn't have to be the owner of that machine. Although first introduced with things like At Ease, MacOS 9's Multiple Users, and other third-party products, OSX again makes this a normal way to work, not an abnormal one. So in a corporate/educational setting, the owner of a given OSX Mac is going to be 'root', and everyone else logs in with various capabilities depending on need. This is a wonderful thing when the person who has been working on a major product quits abruptly and no one has their password. With OSX, not a problem: just have the administrator log in as 'root' and change the owner of the files to the person and/or group that needs them.
A final security advantage is logging. Right now, even as secure as the current MacOS is, if you have physical access to a Mac, even Multiple Users takes about 5 minutes to bypass, and then the entire machine is yours. Worse yet, if the cracker in question is reasonably careful, they could copy every bit on that hard drive, and unless they left something out of place, no one would ever know. However, in the Unix world, there is an ability to log, not only major system events, but in a high-security configuration, you can log every action taken by every user, including 'root'. This means that you can easily track anyone's actions on not only an individual machine, but on the network as well. While this may sound disturbing from a personal freedom point of view, if you are the one in charge of making sure that your company's data and hard work stay that way, it's a serious advantage.
Obviously there are a lot more security features and capabilities in OSX than I went over here. There are also more security issues that need to be dealt with in OSX, and I'll go over those in my next column. But hopefully, the administrators out there who have maybe been a little nervous about OSX and its Unix roots can start to breathe easier, and feel better about what OSX will do for them instead of to them.
created 6 June 2000
Pondering a Microsoft Breakup
With the newest wrinkle in the Microsoft case, and writing this on the 56th anniversary of D-Day, I sat for a few moments and wondered what a Microsoft breakup would mean to the Mac network administrator.
What would be the direct and long-term possible impacts from a two-paned window named Microsoft?
Short-term, not much. The Macintosh Business Unit is a big moneymaker for Microsoft, and one of the few divisions they can point to as an example of how they aren't a Windows-only company. In addition, the MBU gives them experience in one area they may need a lot of if the breakup survives appeal: writing applications that aren't a part of the operating system.
Think about this from Microsoft's point of view. For years, pretty much every Microsoft Windows application has been not so much a program that runs on Windows as an extension to Windows. The Office programmers have been able to write to APIs that no one outside of Microsoft gets to see, and if they need the Windows API altered for Office, well, no problem: make an internal suggestion and see what happens. Even if it didn't work that way, the Office coders had access to the filesystem and OS at almost hardware level. No other Windows application maker has that level of access, that easily.
And it's hurt Microsoft in a lot of ways. Because Office works as part of Windows instead of on top of it, it's been easy to get really good speed out of it. But the Office for Windows programmers have also gotten lazy because of this. The best example is Access. There is no way that Microsoft is incapable of porting this application to the MacOS. The explanation that the Mac market won't support an application like Access is just as inane, as FileMaker's deal with the Department of the Interior proves. So what's the deal then? Why isn't there an Access for MacOS, and along with it a current version of Project (which uses modified Access databases for its file format) for the MacOS?
Access, in its current form, is so tied to Windows that a quick port would require porting a huge chunk of Windows to the MacOS. Access just runs at too low a level to be a cross-platform application. This means that if the Office folks can't keep that juicy low-level Windows access without handing the same APIs to the rest of Microsoft's developers at the same time, they have two choices: freeze Access where it is and create a new, different low-end database, or do the rewrite and get Access above the OS, so the hidden low-level Windows APIs never have to be given away. In either case, once you unlock Access from Windows, getting it onto the Mac is much easier.
Long-term, I think you would see most of Microsoft's applications needing the same type of rewrite. And this is where the Mac community could see some huge gains. Because guess who the current experts in getting Microsoft applications to work at a normal application level are?
That's right, the Macintosh Business Unit.
So now, things are looking a little better all of a sudden for the Mac, from the perspective of more Microsoft apps. Because they already know what issues are involved in running beasts like Word and Excel when you don't have intimate access to the OS.
And from the administration view, there's some nice potential there as well.
Face it, for a company like Microsoft, the Linux market is a swamp. They may do okay, or they may get corporate malaria from it. There's no way to ensure that the Linux you are coding Word for has anything to do with what it gets installed on. Filesystems, windowing environments, hardware: none of these things can be assumed with Linux. This is why you see so many commercial Linux products limited to a select few distributions on specific hardware, or why, like Corel, you run your own distribution for your apps to run on. No matter how you look at it, for Microsoft, it's not a pretty sight.
But OSX? Ah, there's a horse of a different color indeed, and it's green, my friends. Here you have a Unix-based OS with a consistency that only a corporate admin could love. You know what your base hardware is going to be, the GUI is a thing you can bet on, there's no worrying about the kernel version du jour, and it's based on a lot of open-source work!
All the benefits of Unix, and none of the headaches of Linux.
So now, the network management folks at Microsoft need a place to expand into, because Windows is no longer locked up tight for them... hmmm... What is the next logical target? A platform with horsepower and a good user base, one that appreciates efforts towards ease of use and integration, one that Microsoft has experience with, yet isn't a threat to Windows?
What's that Trigger? M-A-C-O-S-X you say?
Again, these are nothing but opinions and predictions, and anyone could do no better or worse with a bowl of chicken livers. But the capabilities are there, the abilities are there, and the Mac has long been a fun little lab for Microsoft. And if I had to decide on an additional platform for SMS, SQL Server, IIS, Exchange, etc., because I needed to create a new revenue stream for those products, MacOS X would be looking pretty sweet. Using stuff you already have is always more effective than reinventing the wheel.
In any case, the computer industry hasn't been this much fun in a long time, and I think that Apple, and MacOS X have a good chance to reap some handy benefits from a Microsoft breakup.
created 28 May 2000
Network compatible applications wanted
One of the most frustrating things as an administrator is to get a call from a user because an application, extension, or some other piece of software isn't working correctly or as advertised. What doubles this frustration is to discover that the malfunction is caused by the software not dealing well with networks.
In this day and age, I find it inexcusable for any program, especially one designed for the workplace, written in the last year to not be fully network friendly. By that I mean applications and software that work on Macs that are running other pieces of software, and performing network functions. Yet time after time, version after version, I see software that just doesn't work well on a network. Some of the problems with PowerPoint and AppleShareIP are an example. As L. Carroll once wrote, "The time has come, the Walrus said..."
It must be assumed that any Mac is multitasking, and I think we can avoid getting bogged down in the relative quality of how it does this. If you are writing any application, assume you are sharing the CPU and other resources with email programs, web browsers, instant messaging software, etc. (Even if you write one of these, don't assume the other company's stuff gets tossed; I have two or three versions of web browsers and email programs around for testing purposes.) That means you have to go through extra hoops during coding and testing.
It means that if you need the user to open a file, that you have to use Navigation Services from the beginning, because you cannot assume that the file will be on a local hard drive. If your application doesn't use Navigation Services, it's time for that update. It means that if you are going to create a dialog, not only must it not stop the CPU until the dialog is dismissed, but the user has to be able to move it out of the way, as it may be blocking the information that the user needs to accurately do what you are trying to get them to do. It also means that you have to create dialogs that can be ignored until the user comes back to that application, as your application may not be the most important thing they need to do.
It means no hard-coding font numbers, and no assumptions about what resources a particular Mac may have when the application is launched, because in the case of a laptop, Location Manager can completely alter these things while your application is running. Laptops bring another issue to mind: power management. This means that you have to play nice when the laptop is put to sleep, or when a battery notification is sent out. So if you are blanking out the menubar, you need to let the user know somehow that they are about to run out of battery. You need to code with an eye towards the CD drive not always spinning, or even not being a part of the computer all the time. It means setting up your installations to be remote-install aware, for folks using netOctopus or FileWave, or the Apple Network Administrator Toolkit. If you write installers, it means that you make it easy for remote installations to be set up.
It means that you don't make assumptions on the locations of files, or that the user has the rights to alter certain files. Multiple Users is a heads up for what will be happening in OS X. No longer can you assume that a single user means a single Mac. The person using your application may need to get to it from many Macs. You need to take this into account.
It means that you don't lock out resources or memory that you don't need, and that you get religious about checking for memory leaks. It means that you don't code for any fixed CPU time other than what you happen to get at that moment. It means that you make proper use of the capabilities of the Thread Manager, or the MultiProcessing libraries, using appropriate gestalt calls to determine what these are. It means that you don't use undocumented calls that may disappear, or use features that are going to go away in the near future. It means that you have a full scripting implementation, so users can use your application in ways you never dreamed of, which has the byproduct of making more people want to use your application.
It means that when the OS tells your application to go into the background, you not only do so quickly, but also relinquish CPU time. No fair eating 80% of the CPU when you're running in the background. It also means putting realistic RAM requirements in your ads and in the Get Info box. No dividing the real-world number by three to get 'normal' and by six to get 'minimum'.
It means a lot of work for developers and testers, and that's a real hard thing to justify sometimes, but here are the benefits of that work. It means that users like to use your application, because it just works. It means that as long as your minimums are met, regardless of the resources available, your application will always work as advertised. It means that network admins like me don't groan and curse your name at the mention of your product. It also means that people doing reviews of products write the kinds of reviews you *want* to put on a bulletin board. In other words, it means you get to have the kind of application everyone wants to use, because it's cool, and it just works.
I'm not going to pretend these things are easy, but they are correct. We live in a distributed, multitasking world, yet I still see applications written with the assumptions that we are all still using the System 6 Finder, and that multitasking and networks are only for UberGeeks. But again, if you do all these things, or at least make a better attempt to do these things, I can promise you one more thing. The kind of reward that comes from having a great application, and the respect, and sales that come along with that.
Think about it....
created 25 May 2000
The 2000 WWDC
Well, after spending a week at the Apple World-Wide Developer Conference 2000, I think there is a lot to be happy about for network administrators. Without going into specifics (because I have this obligation not to break NDAs and personal trusts), I think that administrators have much to be pleased about, not only for OSX, but for OS 9.X and Apple in general.
First of all, the difference between Apple's view on I.S. between last year and this year is amazing. Last year, if you mentioned supporting the Mac as an I.S. professional, the shields went up, and you got the "Apple is not moving into the enterprise market" speech, and that was that, end of conversation.
This year, it seems that Apple understands that I.S. is taking over computer support in the educational market, and that artists using Macs work for enterprise companies. They seem to understand that helping I.S. support Macs does not equate to Apple having a direct enterprise presence.
So as I was talking to various Apple people, they seemed genuinely interested in helping I.S. support the Mac and the MacOS. Maybe it's because they understand that MacOS X is going to be a much bigger product, in terms of things like rendering farms, server farms, etc.
Maybe they finally see that I.S. types can be an ally instead of an enemy. It really doesn't matter.
Or as one Apple person said, "K-12 is as big as any Fortune 500 company."
What matters is that at the documentation sessions, when I.S. folks were asking for better non-developer/non-user support and administration information, we got the answers that we didn't get last year. What matters is that during the sessions that dealt with the server aspects of OS X, when administrators were asking questions and requesting features, the Apple folks were asking us questions back about our questions.
If you're going to blow someone off, you don't ask for clarification. You say "thank you", and quickly move on to the next session.
I was pleased to see Apple understanding that if we are going to deploy OS X in the same space as AppleShareIP, that we will take longer to do that. Servers take longer to deploy than desktops, server budget cycles take longer to go through than desktop budget cycles, server testing takes longer than desktop testing, and Apple appears to understand this.
The answers I heard, and the conversations I had gave me a good feeling about Apple's attitude towards the people who support the computers that developers, educators, and artists create on.
This is not to say that I am entranced by Apple, and blindly going where they point. Like any company, they can change their song if they feel the need to. So I, like any good admin, will be watching how the walk matches the talk. But this is the first time in a while that the talk has been both good to hear, and realistic in tone and content.
It's a start, and that's better than we got last year.
On the OS 9 front, while Apple has been very clear that OS X is the future, and that the classic MacOS has a limited life span, I got a good feel that they aren't going to just leave the users of the current MacOS hanging. They are going to deliver needed improvements to the current MacOS, which makes sense, because from a business point of view, until they start selling OS X, it doesn't exist.
Apple also understands that there will be, in addition to the time frame leading up to the commercial release of OS X, a time when people will buy what they are comfortable with as a fallback from the new OS. Again, if you are looking for the classic OS to get major new versions and features, I personally would not make that bet. The MacOS is a grand thing, but it has needed fixing for a long time. OS X is that fix, and I welcome it. On the other hand, Apple seems willing to do those things that need to be done in the current OS.
Another thing for classic MacOS diehards to remember is that Apple supports products for a very long time after they have ceased to be a product. They supported the Apple II for many years after it ceased production, and considering the size of the current Mac installed base, I can't see them not supporting that base for a similar period.
Again, the important thing that I got from Apple is that while they want all Mac users on OS X as soon as possible, they understand that ASAP does not equal three weeks after release, especially in the educational market. The realism in this attitude is again, a welcome change, (especially if you remember things like Rhapsody and Copland.)
On the OSX front, the feeling I got was that while there were things folks didn't like about Aqua, most of us saw them as not critical to the success of the OS. Considering that at least one developer has already released an Apple Menu for OS X, I think that most issues with Aqua will be handled, either by Apple or by the third-party population.
Remember, MacHack is coming, and that is a very fertile source for improvements to the MacOS. And from what I heard around the conference, there are some very neat hacks already being started, so stay tuned for those.
On the non-Aqua front, the reaction was more generally positive. Apple is using OS X as a chance to fix some long-standing annoyances with the current OS, and they were happily taking ideas on just how this should work.
Security is of major importance to Apple, both in the classic OS, and especially OS X. The last thing that Apple wants or needs is a Melissa/ILOVEYOU debacle, and they seem to be perfectly willing to trade convenience for security. Apple is absolutely aware that a Unix base can open up all kinds of holes that don't exist in the current OS, and they are diligently working on keeping the script kiddies as frustrated with OS X as they are with the current MacOS.
In general, I think the upcoming months to the public beta of OS X, and the final release are going to be some of the most interesting in the history of the Mac, and I am looking forward to it. I would also recommend keeping a close eye on what comes out of MacHack, as I think that this year is going to be one of the most interesting in a long time for that conference as well.
So start planning, start thinking about how you are going to test the public beta, and what it will do for you.
Interesting things are afoot, and we as admins are going to be the beneficiaries of many of them. Only time will tell if that's good or bad.
Dealing with Virii
With the introduction of the "ILOVEYOU" virus, once again, email virii, and methods of dealing with them are at the front of every network administrator's thoughts.
Admittedly, as an administrator with far more Unix and Mac boxes than Windows PCs, I'm not in the same state of utter panic as many of my associates are. But even if I was in a position of having no Windows PCs, I would still make sure that my protection procedures were current.
Now there are a lot of ways to deal with virii, from the blind-panic method of banning all attachments (more than one pundit is suggesting this) to the simpler "Just say no to Outlook" method.
The first one is just nonsensical. Attachments increase the usefulness of email by a hundredfold. The ability to easily get the same data to large numbers of people who need it on the cheap is not going to go away, nor should it. Yes, virus writers, spammers, and even everyday folks sometimes do bad, or silly things with attachments. But ban them? You may as well get rid of email in that case, for without attachments, email is not much more useful than a phone with a good voice mail system.
Other methods that fall more in the middle of extremes are things like user education, setting up a virus scanner on your email server, ensuring that all files are scanned when opened or saved, etc.
The user education one is the most critical. Antivirus programs are only as good as their last set of definitions. If the virus is new and fast-moving, such as ILOVEYOU or Melissa, then there is going to be a delay in getting the newer antivirus definitions out. Obviously, the virus can spread unchecked during this delay. If your users are educated, and motivated to use that knowledge, then the chances of someone opening a strange attachment are much lower.
One of the big problems with the ILOVEYOU virus was that because it used the Outlook address books, the email was coming from a legitimate source, and once people saw that, they assumed that the attachment was legitimate too.
This is where education is needed. Users need to know that a certain amount of discretion is needed in dealing with attachments, regardless of source. I think that, outside of Bill Gates or Warren Buffett, there are not too many people who could expect to receive a legitimate email from Dow Jones with "ILOVEYOU" as the subject. In almost every case of really bad infections, this is what was happening: the sender was legitimate, so the users opened the email, and boom. Infections. In some cases, the same person opened multiple copies of the same email from different sources.
Again, no antivirus program can substitute for education, and the will to use that knowledge.
Another set of solutions is to ban Outlook, the Windows Scripting Host, Exchange, Windows itself, etc. Although at first glance these seem silly, I wonder if some of them may not have value.
Feelings about Outlook aside, it has been the best vehicle for spreading virii over the last year or so, and none of Microsoft's updates seem to be fixing this. Before I get the emails on how useful Outlook is: believe me, I understand this. But security procedures (and virus prevention is a part of them) require you to eliminate holes, and as it stands, Outlook, Exchange, and the Windows OS act as huge security holes all too often.
The fact is, there are a lot of ways to get Outlook's functionality without the risks. For one, consider using a Unix-based IMAP email server, such as Netscape/Sun's or Stalker's, instead of Exchange. (I know that AppleShareIP has IMAP features, but it has some hard limits that make it unsuitable for all but fairly low-end implementations.) IMAP gives you easy access to your email regardless of location, computer, or operating system. IMAP is supported by almost every email client available, which gives your users more options.
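That client-independence is the whole point of an open standard: a tiny client works against any compliant server, whatever OS sits behind it. As a hedged sketch, here is what talking to an IMAP server looks like using Python's standard imaplib module; the host, account, and password are placeholders, not real values.

```python
import imaplib

def unread_count(host, user, password, mailbox="INBOX"):
    """Count unseen messages over IMAP. Works against any
    RFC-compliant IMAP server, regardless of the server's OS."""
    conn = imaplib.IMAP4_SSL(host)
    try:
        conn.login(user, password)
        conn.select(mailbox, readonly=True)  # don't touch message flags
        status, data = conn.search(None, "UNSEEN")
        return len(data[0].split())
    finally:
        conn.logout()

# Hypothetical server and account; substitute your own:
# print(unread_count("mail.example.com", "mjones", "secret"))
```

The same dozen lines run unchanged whether the server is Stalker's on a Mac, iPlanet on Solaris, or anything else that speaks the protocol.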
Consider a separate calendar server, such as Meeting Maker, or CS&T's Calendar server. These give you advanced scheduling features beyond what Outlook and Exchange support, and run on non-Windows OS's, which can eliminate yet another entry point for Visual Basic virii.
A good news server can give you collaboration and discussion capabilities, and if you need real-time features, there are a number of standards-based video conferencing servers from companies such as White Pine.
The advantage to considering multiple products is that you can tailor your solution to your needs. As well, Exchange has a history of not supporting anything other than Windows terribly well, whereas these products have full featured clients for almost all OS's and platforms, including Palm.
The disadvantage is that these are multiple products, and you have to manage them as such. You don't get one client that deals with all of them from one spot, (although Netscape/iPlanet comes close). Your education and training costs go up, because you have to learn how to manage these different products correctly.
That has always been a large part of the draw for Microsoft Exchange, the fact that you can get 90% of what you need with one product, on a relatively easy to manage OS.
So you have to make the choice:
Is it better to use Exchange and Outlook, and understand that you will have to be extremely proactive about email security and virus prevention and protection, but gain the simplicity of only needing one product?
Or is it better to have multiple servers, possibly on multiple platforms, that allow you to avoid Outlook/Exchange and its associated risks, but will force you to deal with the problems that integrating multiple products entails?
Unfortunately, that's the choice that all network administrators face, and there is still no easy answer.
created 23 April 2000
No matter how fast your network is, as an admin, you will always get the call for more speed. If you have switched 10Mbps lines to the desktop, you'll get the calls for 100. If you put 100Mbps to the desktop, you need Gigabit. I can guarantee you, no matter what you do, the network will never be fast enough for some folks.
Making things even harder is the never-ending array of new technologies that will make your network so fast, you won't even need hard drives in your Macs. Gigabit Ethernet, ATM, FireWire networking, Fibre Channel, high-speed switching, vLANS: the list is almost endless.
However, just blindly throwing money at a single solution may give you more speed, but it may also ignore the problems you are really having. Each of these technologies has a place where it fits better than others, and each of them has problems related to its design and/or relative maturity.
First of all, on the desktop, you are going to be limited in what you can use. These days, the newer Macs all come with 100Mbps Ethernet, so that's a pretty good place to start. Most of your standard Ethernet equipment will handle switched 100Mbps without a lot of extra cost. Although you can use 100Mbps hubs and save some money, the shared bandwidth of a hub will give you less of a speed increase than you get with the dedicated bandwidth of a switch. Also, switches give you more configuration and management options for things like vLANS, and segmenting.
The next question is what about servers? This gets tricky, due to some limitations in the MacOS's networking architecture. The biggest problem is that the Mac cannot inherently have more than one active network interface per protocol. There is an exception to this in AppleShareIP, but that is strictly limited to AppleTalk, and AppleTalk's future is somewhat limited. Although OSX will lift this restriction, for now you can't have more than one active interface per protocol in the current MacOS without third-party add-ons.
This does not mean that you shouldn't consider higher-speed interfaces for MacOS servers. For things like print servers, RIP servers and the like, a Gigabit Ethernet card is still a good choice. One reason is that with a Gigabit card, the server can handle ten 100Mbps clients at full bandwidth. With some clever uses of switching, you can keep that Gigabit interface working at full speed, giving you maximum use of the available bandwidth, which in a heavy use environment can save you quite a bit of time and money.
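The arithmetic behind that "ten clients" claim is worth checking. Here is the back-of-the-envelope version in Python, using raw line rates; real-world throughput will be lower once protocol overhead is counted, so treat these as best-case numbers.

```python
# Raw line rates, in bits per second.
server_link = 1_000_000_000   # one Gigabit Ethernet card in the server
client_link = 100_000_000     # switched 100 Mb/s to each client

# How many clients at full 100 Mb/s can the Gigabit card feed at once?
clients = server_link // client_link
print(clients)  # 10

# Best-case time for a 200 MB file over one 100 Mb/s client link:
file_bits = 200 * 1_000_000 * 8
seconds = file_bits / client_link
print(seconds)  # 16.0
```

So even ignoring overhead, a single shared 100 Mb/s uplink on the server would leave those same ten clients dividing sixteen-second transfers into minutes; the Gigabit card is what keeps them all at full speed.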
Another area where higher-speed interfaces can help is with file servers. Technologies such as Fibre Channel allow you to set up disk arrays with access speeds that rival most SCSI implementations, and without the baggage that SCSI still carries, such as termination, LU numbering, etc.
In combination with a Gigabit card, you can get greater efficiency for all users of the server by eliminating bottlenecks on a few select Macs, and save money over reconfiguring an entire network. (I have seen too many servers with insanely fast disk arrays that are then attached to the network with 10 or 100Mbps connections into an almost-full hub, and everyone scratching their heads over why things aren't faster. )
Another advantage of Fibre Channel is that since it was first designed as a networking protocol, it can help you set up a Storage Area Network, or SAN. The advantage to a SAN is that the drives are not 'assigned' to a specific computer host, but exist as their own entity on your network. You can then have multiple servers accessing the same drive array, without one of them being dedicated as a file server. SANs are also OS independent, so if you have Windows NT/2000 servers, or Sun Solaris servers, they can easily access drives on a SAN as well.
FireWire is another interface that is coming into its own as a networking implementation. Recent announcements from companies like VST, MicroNet, unibrain, and others are showing that in addition to being a way to get things like video into a Mac, FireWire can let you set up high-speed networks or SANs with relative ease. Although FireWire is limited in the number of devices on a segment compared to Ethernet or Fibre Channel, the fact that all Macs come with it is a compelling reason to consider it, especially for server-to-server communication. FireWire is also host and OS independent, so devices on a FireWire network can operate independently of any other devices, and do not need a host server.
Backup is another area that bears careful analysis for network speed. Here, the infrastructure of the network is more important than the interface speed on the backup server, mainly due to the relative slowness of the backup media and the backup process. Because of the stop/start nature of the backup process, and client load, even a fast tape drive will have trouble consistently sustaining speeds in the 100Mbps range. So for more efficient backup servers, you are better off ensuring that all your client Macs have clean, switched, 100Mbps connections. This will give you better overall results than slow connections on the clients and a Gigabit card in the server.
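A rough worked example shows why that duty cycle dominates. The 40% effective utilization below is an illustrative assumption, not a measured figure; the point is how hard the stop/start overhead cuts into a link's nominal speed.

```python
# Nominal client link speed, in bits per second.
link_bps = 100_000_000           # switched 100 Mb/s

# Assumed effective duty cycle during backup (stop/start overhead,
# client load, tape repositioning). Illustrative, not measured.
utilization = 0.4

# Size of a hypothetical nightly backup run.
data_bytes = 50 * 1_000_000_000  # 50 GB

effective_bps = link_bps * utilization
hours = data_bytes * 8 / effective_bps / 3600
print(round(hours, 1))  # 2.8
```

At full wire speed the same 50 GB would move in a little over an hour; the duty cycle nearly triples the window, which is why cleaning up the client connections buys more than a faster card in the server.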
Although I have mentioned only the current MacOS in conjunction with these products, there are Gigabit Ethernet solutions available from TeamASA, and Fibre Channel solutions from Micronet and Hammer that will run under MacOS X Server as well. Considering features of OS X Server such as NetBoot and QuickTime Streaming Server, getting as much speed from the hardware as possible only makes sense.
In the end, planning your high-speed implementation is as important as the implementation used. By carefully considering which technologies are best suited to the tasks at hand, you can greatly improve the speed of your network, without breaking your budget.
What is the 'best' computer?
created 19 April 2000
As a network administrator I am often asked, "What's the best computer?" Unfortunately, my answer tends to be a long series of questions starting with, "What do you want to do?" The question is a side effect of the computer industry, and for the network admin, the start of what is often a long series of headaches. People (often with titles like "CEO") want the BEST computer. What I have found, however, is that there isn't one. Unlike a single-task tool, such as a shovel, computers are designed to do, well, almost anything you can think of. With that kind of scope, it is impossible to pick one CPU, one operating system, one application and say "This is the BEST". But still the question persists.
What is really being asked here is, "What should we standardize on?" If you are running a network with almost any number of Macs, the follow-up to this is "I think we should get rid of all the Macs because..." There then follows a list of reasons, some well thought out, some encouraged by certain other press outlets. In any case, the person has decided that somehow, having Macs on the network is a "bad thing". They have also decided that the only way to avoid the "bad thing" is to move completely to an 'industry standard computing platform', meaning Windows in 99.999% of these cases.
The problem for an admin with a network full of Macs that is running smoothly, and users who are getting their jobs done, is how to point out that standardization has much more to do with the work you are doing, and less to do with the hardware you are doing it on.
One of the first things to do is find out why there is a push for standardization. Often, you will find out about problems that users have been having, but hadn't reported. In any event, sit down with the people behind the standardization push and talk to them. You can't counter arguments until you know what they are.
In many cases, the reason given is data exchange. Considering the misconceptions about what Macs can or cannot do that are endemic in my profession, do not be surprised if the same misconceptions are held by folks who aren't technically oriented. Make sure to point out that it is far easier to standardize on a document format for a given type of information transfer than to completely rebuild a network. One of the best I have found is Adobe's PDF. Because it is content neutral, it can be an excellent final format for any kind of data, from word processing files to presentations. Other standards that are easy to implement are things like Word 8, or even HTML, for word processing documents, Excel for spreadsheets, and so on. Not only are these areas where the Mac can meet or beat Windows with ease, but this approach concentrates the standards process on the output of work, not the process of work. In the scientific arena, the Mac versions of products such as Mathematica and IDL are just as capable as their Windows counterparts, and with the speed boost these products get from the AltiVec units in the G4, quite often a good deal more capable. (At the 1999 Apple WWDC, the Research Systems rep demoing IDL on a G4 said it was the fastest single-CPU version of that product in the company.)
If the reasons given are based on security, point out that there are viable, current, standards-based security solutions for the Mac in areas such as VPNs and SecurID systems. Be able to show that the Mac plays with Internet standards such as IPSec as well as any other platform. Also point out that in areas such as resistance to cracking attempts, the MacOS is considered one of the most secure platforms available. Cite specific real-world examples, such as the U.S. Army's MacOS-based web server being the only one that hasn't been successfully hacked in their recent round of troubles.
If one of the arguments for an all-Windows network is that Macs cause too much network overhead due to AppleTalk, show them that AppleTalk is no longer the necessity it once was. Macs are quite capable of functioning in a 99% pure TCP/IP environment without losing capabilities. You can bypass AppleTalk for printing, and use TCP/IP printing via LPR, in all OS versions since 8.1. In MacOS 9.0, the Network Browser not only does AppleTalk, but can function as an FTP client and an LDAP client. Products such as MacNFS, and DAVE from Thursby Systems, allow the Mac to connect to almost any other computer system in a TCP/IP environment. With MacOS 9, even personal file sharing is TCP/IP-based, and has less overhead than AppleTalk did. Again, the Mac has a definite architectural advantage here, as getting a Windows PC to deal with not only its native SMB networking but things like NFS or other protocols is a more complex process than it is for a Mac. From the manageability standpoint, with the addition of SNMP in OS 8.5, and products such as Netopia's netOctopus, you can easily show that Macs are capable of being run in a coherent, centralized manner.
If there is still a push for 'standardizing' on Windows, don't be afraid to point out the costs of 'standardization'. Forcing users to switch platforms is going to cost a lot of money: not just the initial hardware purchases, but servers, network equipment, IS staff training and support, application replacement, and user training and support. I have found that a lot of the folks who push for a 'Mac dump' imagine replacing the Macs with $400 computers, and when the real numbers are shown to them, they are often quite willing to rethink the idea. Having to hire two or three more full-time IS people just to support an all-Windows network is another cost that will never go away, and again, one that probably wasn't considered in the initial decision. Bring along a catalog showing the training costs involved in bringing your staff up to speed so that they can properly run an all-Windows network.
Standardization, when used correctly, can be a time and money saver. As a Mac administrator, you have the obligation to be up to speed on the areas where the Mac is able to comply with industry standards, and where it is not. This is no different than any other platform. Make sure that someone from the IS group is a part of any standards body at your company, so that technical issues with changing or updating standards are adequately explained. Sometimes it can be a real bear to have to play the advocacy game, but if you consider the alternative, it's a small burden indeed.
March 28, 2003
created 17 January 2003
On behalf of anyone who has anything to do with the technical side of networking, I'd like to say this to the people who are just trying to get stuff done on a network:
(I know I should be writing about MacWorld Expo, but this is more important.)
I'm sorry that just using a network isn't as easy as turning on your computer and making sure you have a wireless card, or a cable plugged into the right place. Because let's be honest, TCP/IP networking, for all it does, is really painful to use. Manual addressing is a joke from a human perspective. I mean that. Think about the subnet mask from a non-geek perspective. It's ridiculous that humans need to deal with that. DHCP isn't much better; you still have to deal with DNS, unless you have a DHCP server that handles that too.
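To make the point concrete, here is a sketch of the bookkeeping a manually configured host is implicitly expected to get right, using Python's standard ipaddress module (the addresses are made up for illustration):

```python
import ipaddress

# The address/netmask pair a user is asked to type in by hand.
iface = ipaddress.ip_interface("192.168.1.37/255.255.255.0")

print(iface.network)                    # 192.168.1.0/24
print(iface.network.broadcast_address)  # 192.168.1.255

# Get the mask slightly wrong on one machine, and two hosts that
# look adjacent end up on "different" networks -- and silently
# can't talk directly to each other.
a = ipaddress.ip_interface("192.168.1.10/255.255.255.0")
b = ipaddress.ip_interface("192.168.1.20/255.255.255.128")
print(a.network == b.network)           # False
```

None of this machinery is anything a person printing to the printer in the next room should ever need to know exists.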
Even if it does, that still requires at least two servers. Why? Why do we need this idiocy just to make IP easy to use? Especially when we've seen that there's no reason for it. AppleTalk showed us over a decade ago how this should work. If you just wanted to connect stuff together on a smallish scale, there was no configuration, no nonsense about subnets, etc. You just had to make sure the computers were all hooked together. You needed to print, you could find printers easily. You needed to share files, that was just as easy.
What do we have with IP? Nothing in the same realm. Not even close. Why? Because the keepers of the keys of IP don't 'get' people. They can't understand why we want to 'pollute' their purity with such dreck as easy service discovery, name resolution, and zero configuration. They get very worried that a clustering app will break (which it may), or that large networks will have problems (they will anyway).
But somehow, the idea of forcing people to buy DHCP servers for a little home network consisting of a printer and a laptop is okay. Well, it isn't. It's not even in the 'okay' realm. There is no reason to require this, but it is required, and that's a real shame, because it limits the inherent usefulness of networks. It forces people to have half a dozen cables for everything. With Gigabit Ethernet, you have fast enough transfer speeds that you really only need other interfaces for vertical market reasons.
Think about this: what if TCP/IP was as easy to use as FireWire? What if you could just use IP for all your connectivity? I mean external hard drives, cameras, MP3 players, all of it. No futzing about with DHCP or other server-based configuration options, just plug and play. Connectivity would be dead simple; everything's a network node. It would really help drive high-speed access in places that don't have it. You could have an 802.11g MP3 player in your car, and sync it with iTunes in your house, so you wouldn't need your iPod in the car.
Don't misunderstand, I wouldn't personally use most of it. I'm a bit of a Luddite at home; I need that break occasionally. But I can see there's potential there. Unfortunately, right now, it's all stalled because there's no standard way to do brain-dead networking without servers, etc. That's stalling a lot of progress that could make things a lot better for a lot of people.
Not just at home either. Having a standard way to have medical gear talk to each other without configurations, or configuration servers would be a great way to make that equipment cheaper. All it needs is a network connection to communicate. The same for law enforcement.
It should be easier, but it isn't, and in 2003, that's just inexcusable.
Ease Of Use
created 3 Dec. 2002
I was listening to an interview of John Gruber of DaringFireball.net by Shawn King, on Shawn's "Your Mac Life Extended" segment this week. The interview was about a post Gruber had put on his site about perceived deficiencies in the Mac OS X Finder, and I felt that while Gruber made some good comments, he also hit a lot of hot-button issues that I think need to be commented on. Gruber articulated a lot of things that a lot of people believe, but they may not be as right as they think.
I'll say right off that I think he's having a severe attack of "The Good Old Days". Don't get me wrong, the Mac OS 9 Finder is a good piece of work, and after 17-18 years of optimization, you'd expect it to be. If you view Jaguar as version 2 of Mac OS X, then the fair comparison is System 2 against Jaguar. I'd expect that in 15 years or so, the Finder in Mac OS X will be a much nicer piece of work.
But he also made some statements in the interview that made me wonder. For example, he essentially says that no one designing UI at Apple understands UI design. From where I sit, this is nonsense. Mac OS X is a different UI, and some of the decisions in that UI, like Mac OS 9's, have been bad, and some have been good. But if you never make a mistake, you are not trying. Getting too locked into a UI design, and refusing to try something new because "we don't do it that way", is a trap, and that attitude hurt the Mac more than it helped it.
Now, this is not to say that you change things for the sake of change. But if there is a good reason for a change, even if people don't agree with it, then try. Stretch your wings. The Mac OS 9 Finder was a static thing. It was as done as it was ever going to be, and it was never going to grow in any new direction. That's a sure sign that it's a dead thing too. If you are afraid to try something new because you might make someone upset, you stagnate and die.
Remember, Copland didn't die because Apple couldn't write an OS. It died because every time the MacMacs complained about something, Apple caved, added yet another feature, changed the target, and bloated the code a bit more. Copland went from a PPC/PCI-only OS to one meant to run on everything Apple made with a hard disk. Copland couldn't even achieve a code-freeze state, much less get to the shrink-wrap stage, so the only sensible choice was to kill it.
I'd also say that the idea that Mac OS 9 is so easy to use that you were practically born knowing how to use its Finder, the way you were born knowing how to use a pacifier, is tripe. There is no computer UI that is inherently easy enough to use as to not have a learning curve. It is a non-physical, two-dimensional representation of magnetic impulses on magnetic media, transistor states in memory, or pits in an optical disk. It is how we manipulate 1s and 0s. There is nothing in human instinct that deals with that. Maybe in a couple of thousand years, but not now.
So we have to learn how to use any OS, including (heresy!) Mac OS 9. I've found that if student and teacher have an open mind and a proper attitude, Mac OS 9 is no harder or easier to learn or teach than Mac OS X, and I've taught both to all skill levels. But when you've used a thing for many, many years, you "forget" all the learning you had to do to get to the unconscious ease of use you now have, which is where most Mac 'fogeys' are with Mac OS X. They're back where they were when they first learned how to use a Mac, far below where they were with Mac OS 9, and isn't that an ego buster. (For those about to complain that there really is something magically easy about the Mac OS 9 Finder, think about how hard it is to explain to a newbie that each mounted drive has its own desktop folder, and that there's really no easy way to distinguish between them...)
The Finder organization complaints have some validity, but they are partially bugs, and partially silly. If you don't set your desktop to be pre-arranged, you can drag stuff everywhere, and it will generally stay there. If, as in the OS 9 Finder, you arrange by name, then it does just that. The OS X Finder does have a really annoying mild case of "Settings Alzheimer's", but that is fixable, and has gotten better.
I also find it curious that he talks about how he wants the Mac OS X Finder to be simpler, but then wants Apple to implement different types of windowing methods depending on how you're viewing a folder. That's increasing complexity, and decreasing consistency, and this would be an improvement?
He also advocates creating what would become a second Finder, namely the "Column view Finder", which would behave inconsistently depending on whether you single-click or double-click an icon. Again, there is no way you can seriously advocate adding two or three layers of complexity to the Mac OS X Finder and somehow expect that to make it behave like the simpler, more consistent Mac OS 9 Finder. You do not achieve simplicity by adding on; you achieve it by taking away. I won't even get into the AppleScript nightmare this would create, beyond a shudder at trying to deal with Folder Actions under Gruber's ideas.
There's a tendency to talk about the Mac OS 9 Finder as being "better" or "worse" than the Mac OS X Finder. Well, it's both; it totally depends on the individual user. I talk later in this article about efficiency improvements in the Mac OS X Finder, and those are real.
I work how I work, you work how you work. For me, going back to the OS 9 Finder is agonizing, because it doesn't, and in many ways never did, work the way I like to work, whereas the Mac OS X Finder does. For someone else, the Mac OS 9 Finder was far more in tune with how they work, and Mac OS X's Finder is painful. If it sounds like I'm saying that much of this 9 v. X argument is completely subjective, well, that's correct. I am saying that. But there are some issues to consider about this argument.
Another point to consider is that because of Mac OS X's structure, had the Mac OS 9 Finder been kept, it wouldn't be the same beast it is in Mac OS 9. Mac OS X has too many operational differences that would require major changes to the Mac OS 9 Finder.
I also found one aspect of the Mac OS 9 Finder amazingly infuriating: the way it hid a lot of useful things from new users, and then made using them more cryptic than they needed to be. Take cmd-clicking on the title bar to get the path to the current folder. This is an incredibly usable thing, but it's hidden in OS 9. It used to make me insane to hear some Mac fogey hitting a newbie with some arcane list of bizarre key combos that you had to have to really hit your stride with the UI. If it's there and it's useful, make it obvious in the UI. I find that Mac OS X does a far better job of this. The zoom box is another example. For newbies, this is just that weird thing that changes window sizes. It's not more useful in Mac OS X, but at least the icon has something to do with the function.
His comments on the window = folder relationship are interesting, because that's also sometimes the hardest thing to teach someone: that in Mac OS 9, a window is a folder. You spend all this time dealing with folders as containers, and surprise! it's a view too! This isn't just an issue for mental acclimatization to Mac OS 9, it's an issue for scripting too; that folder/window relationship makes scripting the OS 9 Finder a bit of a PITA. I find it far easier, under Mac OS X, to show that a window is just the way you look at folders, applications, hard drives, etc. This is a level of abstraction that is more natural.
Think about it.
Under the Mac OS 9 paradigm, to see what is in a room, you have to go into the room (icon view), or, if there are subrooms, you can open up the walls and look at them from within the room (list view). Unless you know the secret shortcut key, you have to leave every room you've been in with no walls, until you put the walls back on. If you do know the secret key, then as you move to a new room, all doors in the room shut, and there's no obvious way to see where you are in the building.
You can't be in a room, and easily see all the way back out to the building through the doors you left open behind you (column view in Mac OS X), unless you are still in the main lobby, (hard drive root), and have removed the walls from every room between the lobby and the room you want to see (expanded list view), or leave every room you've been in with no walls. There're no hallways in Mac OS 9.
Movement between rooms is not very efficient either. You either have to climb through a window that opens onto an adjoining room (folder in the window), remove building walls (expanded list view), or back out of the room the same way you came in, until you find another window that opens onto the correct path of adjoining rooms, even if it means going back to the entrance of the building itself (desktop/hard drive root). You can always create a new hole to move directly to that room (alias), or you can realign the rooms to make your life easier.
Since Mac OS X uses windows as a way to view a room, you can just wander down the hall to the room you want. You still have to go through rooms, but you get to use doors. You can easily look back through the doors you left open behind you, and take a different door. You can see every door in every room you've been in, but the walls of the rooms are intact; you're just looking through the window. You can even take a direct tunnel (Go to Folder), but it doesn't require creating a new structure (alias). If you feel more comfortable with the Mac OS 9 paradigm, you can use that as well, but it's all within the same app. There's no 'special' Finder that does this in Mac OS X.
The physical interaction of the OS 9 Finder was quite tedious at times. If you wanted to move from your desktop to a folder that was 9 levels deep, and you didn't have an alias to that folder, the process was something like this:
Option 1: Clickclickclickclickclickclick.... 9 double-clicks, and that's assuming you can get to your next target without scrolling... and if you don't know the option-key trick, since it's poorly documented and not obvious in the UI, you now have 9 open windows.
Option 2: you know about spring-loaded folders, so it's a click and a half, and hold...delay...boing...find the next target, gotta not let go of the mouse button, gotta not scroll outside the window, oh crap, wrong folder, move outside of that window, but not the originating window...real efficient.
These lead to:
Option 3: this is of course what you do after going through options 1 or 2 too often...you put an alias to the folder on your desktop, rearrange your folder structure, or you put an alias to that folder in your Apple menu.
Let's get this straight: this is NOT an inherently efficient way to browse your directory structure. If you are putting aliases to stuff hither and yon, and rearranging things to be shallower, then the Finder is so inefficient that you have to work around it. That is not the mark of "the best computer program ever written".
The browser metaphor in Mac OS X is not like a web browser, which is a stateless, static view of data with some dynamic capabilities. In Mac OS X, with spring-loaded folders and horizontal scrolling in the browser view, you can now move files in multiple directions far more easily than you could before. As well, the (quite intelligent) borrowing from Windows of the ability to cut and paste Finder objects, like documents, folders, and applications, in combination with the browser view, makes the Mac OS X Finder a far *more* efficient way to work with files.
Again, for a physical analogy: with Mac OS 9, you have to drag your stuff behind you as you move to its final destination. The only quick way is to get the destination next to the start, and throw the stuff through the windows of the two rooms. Under Mac OS X, you can certainly do it this way, or you can just mark the stuff you want to move, walk to the destination, and hit the transporter button. Poof, stuff's here. Even better, it's a copy, so if something scrambled the signal, the original stuff is fine. Then you walk back to the source, and chuck out the old stuff.
Direct manipulation is not always the best way to do things, and the Mac OS 9 Finder was crippled in many ways by its total reliance on direct manipulation.
Another problem I saw in the interview was Gruber's vague attempt to attribute the lack of adoption of Mac OS X to UI problems. This is simplistic in the extreme, and as silly as Apple's attempts to attribute the lack of adoption to some magic-bullet application. Gruber's assertion may be true for a small percentage of Mac home users, but he leaves out a lot of issues. Most of the K-12 market couldn't move to Mac OS X until Mac OS X 10.2 and 10.2 Server came out; not because of any UI fixes, but because prior to 10.2, there was no equivalent of Macintosh Manager and NetBoot for Mac OS X. In a K-12 setting, there was no way to roll out machines without ways of locking them down and managing them.
At the university level, the relatively poor LDAP support prior to 10.2 was an issue. For many universities, it was an issue of application support, and of support-staff readiness, too. It takes time to set up your help desk, and it takes time to train people. Remember, university-level students tend to take care of themselves, but university employees still need to get paid, and for that you need payroll software and other things that are only now being supported on Mac OS X.
Up until Mac OS X 10.2, Apple didn't really have an OS with any sort of support infrastructure for any level of the educational market, so schools couldn't really upgrade en masse until recently. The same holds true for corporate Macs. Prior to 10.2, getting a Mac OS X box into a state where it was compatible with management tools was tedious, and a lot of needed features, like remote installs, were missing without additional cash expenditures. That was a real drag on Mac OS X adoption rates, and it was almost entirely Apple's fault. But none of it was due to a bad UI.
He then dismisses a real problem in the Mac community, and it's our dirty little secret. We hate change. Oh we want Windows users to change, and Unix users to change. But we get really strident at any changes in OUR stuff:
- "Whaddya mean I have to deal with this MultiFinder thing? You only need to run one program at a time!"
- "Whaddya mean I can't turn off MultiFinder in System 7?"
- "Whaddya mean I need 4MB of RAM for System 7?"
- "Whaddya mean there's now color in the OS? Black and White is all you need!"
ad infinitum. I like using Macs, but I really do believe that the Mac community is unhappy unless they have something to complain about in a loud fashion. (Sidenote: Clarus is dead, the smiley Mac at boot is dead, both are dead, just like Elvis. Move on already)
Again, I think John Gruber and Shawn both raised some valid points, and none of their arguments should be dismissed without reading/hearing them and thinking about them. But they're not as right as they may seem either.
Journaling in Mac OS X 10.2.2, what's up with that?
created 24 Sept. 2002
So, as everyone has seen since the release of Mac OS X (Server) 10.2.2, you can now enable journaling for HFS+ in Mac OS X Server. You can enable it in non-server Mac OS X 10.2.2 as well, via the diskutil command. However, that version of the OS isn't optimized for journaling, so you may see a rather severe performance penalty for disk writes on plain Mac OS X 10.2.2, although my own tests haven't indicated a noticeable problem.
The first question here is: what is journaling, and why do we care about it? On a journaled file system, any disk transaction that results in a change to the disk (think: anything that isn't a read) is logged in a journal file (hence the name). The journal keeps track of the transaction, and the disk information for the transaction. If you work with databases, the journal is like the list of uncommitted transactions. Periodically, transactions are 'committed', or marked complete, and removed from the journal file. (This is necessary to avoid truly obscenely large journal files.)
The journaling operation itself does impose a performance penalty on disk writes. Mac OS X Server alters the sizes of certain buffers used for file transactions when journaling is enabled, which mitigates much of the performance hit, taking it from the 10-15% range down to the 2-5% range for a system with 512MB of RAM. The more RAM you have, the more buffering can be used, so your performance hit decreases accordingly. This buffering does not occur on non-server Mac OS X, which is one reason why Apple is not supporting or recommending journaling on anything other than Mac OS X Server. It also isn't really needed on non-server systems, so if you don't need the protection, or can't take the speed hit, you may not want to use it there.
The 'magic' of journaling happens when your system crashes, or you have to reboot in anything other than a standard manner. Normally, in these cases, the disks are considered 'dirty', or in an unknown state, so the entire disk has to be checked via fsck on reboot. This can take a few minutes on a 40-60GB disk, but if you have external or internal RAIDs in the multiple hundreds of gigabytes to the terabyte range, the checking can take hours, as the entire file system has to be checked even though not all of it is in a potentially damaged state. If journaling is enabled, the check gets far more efficient: the journal file is 'replayed', and the information in it is used to check only the parts of the file system that are truly in an uncertain state, which are then repaired based on the information in the file. So instead of checking an entire 600GB array for damage, only the 100 uncommitted writes are checked. It's the difference between having an error list as a guide and having to find the errors on your own. This makes post-crash reboot times far faster.
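The replay idea can be sketched in a few lines of toy Python. This is purely illustrative of the concept described above, not how HFS+ actually implements its journal:

```python
# Toy sketch of journaling: changes are logged before being applied,
# committed entries are dropped, and after a "crash" only uncommitted
# entries need replaying -- not the whole disk.

class JournaledDisk:
    def __init__(self):
        self.blocks = {}    # the "disk" contents
        self.journal = []   # uncommitted (block, value) entries

    def write(self, block, value):
        self.journal.append((block, value))  # log the change first...
        self.blocks[block] = value           # ...then apply it

    def commit(self):
        self.journal.clear()  # entries are known-good; drop them

    def replay_after_crash(self):
        # Re-apply only the journaled (uncommitted) writes, instead
        # of checking every block on the disk.
        for block, value in self.journal:
            self.blocks[block] = value
        checked = len(self.journal)
        self.journal.clear()
        return checked

disk = JournaledDisk()
for i in range(600):             # think: a large array's worth of
    disk.write(i, "old data")    # data, long since committed
disk.commit()
disk.write(600, "in flight")     # a couple of writes at crash time
disk.write(601, "in flight")

print(disk.replay_after_crash())  # → 2 entries checked, not 602
```

The win is entirely in that last line: recovery work scales with the number of in-flight writes, not with the size of the volume.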
(In tests on my 800MHz TiBook, reboot times were cut in half. On a single-CPU Xserve, the time reduction was much greater, due to the time saved on the 240GB RAID.)
So, if you need increased data protection, journaling may be a feature for you to consider. The next issue is: what changes happened to HFS+ to enable journaling? Well, from talking to Apple, not much. The Mac OS X Server folks at Apple told me that making journaling compatible with current setups was very important to them, so it's implemented in a fashion that is transparent to most applications. There are no changes to HFS+ that would require a reformat, or a conversion of any kind. Journaling can be enabled or disabled on the fly, via the Disk Utility application. The applications most affected by this are, of course, disk repair utilities. My testing was limited to fsck and DiskWarrior, and showed no real issues. Apple has released a Knowledge Base article on using fsck with journaled volumes that covers how to enable journaling as well. The only issue I have seen with DiskWarrior is that it invalidates the journal file, but that is most likely related to having to boot into Mac OS 9 to run DiskWarrior. No damage was caused, and re-enabling journaling took seconds, but it is a caveat. I don't use Drive 10 or Norton Utilities, so I didn't test with those.
The way journaling is implemented in HFS+ means you get a journaling add-on to HFS+; it does not make HFS+ into a journaled file system, or JFS. There is more to a ground-up JFS than a journal file, although you get most of the benefits with Apple's implementation. I have used, on Solaris systems, third-party disk software that lets you make a UFS disk into a journaled system without having to reformat, and without changing the essential structure on the disk, and Apple's implementation reminds me of those products.
Since I brought up the DiskWarrior caveat, there are a few more things to watch out for with journaling. First, Knowledge Base article 107252 points out that journaling and disk quotas are incompatible with each other, so if you need one, you can't use the other. Journaling in 10.2.2 is limited to HFS+ only, so if you use UFS, you are out of luck. If you use it on a portable drive, and move the drive between 10.2.2 systems, the journaling will follow the drive, so you keep the benefits there. (I've seen reports of people using it with iPods... okay, yes, you can do this, although I don't see the point.) But if you move a journaled drive to a system running an earlier version of Mac OS X, the journal file is invalidated, and you have to re-enable journaling when you hook the drive back up to a Mac OS X 10.2.2 system. Mac OS 9 simply ignores the journal, but since you will then have transactions that never made it into the journal, you should re-enable journaling after attaching a journaled drive to a Mac OS 9 system as well. With regard to the RAID software used on Xserves, journaling shouldn't affect rebuilding a mirror, but I haven't been able to test that, so caveat emptor.
Now, should you use journaling? Well, my usual answer to that type of question is, "If you're asking, the answer is most likely no." It's a very useful feature if you need that level of data reliability, but it has its penalties as well. Remember, just because you can do something doesn't mean you should. If you have a test system, run it on that system, and get your own results; my lack of problems is only valid for my setup, not yours. Or yours. If you aren't running Mac OS X Server 10.2.2, then journaling puts you at odds with Apple tech support, so that's a consideration. As well, journaling does not mean you get to stop doing backups, or throw away your disk utilities. It helps data and disk integrity; it doesn't guarantee it. But from what I can see, it's a stable implementation that brings more good to the table than ill.
The State of AppleScript
created 24 Sept. 2001
The state of AppleScript
Well, in a nutshell, it's pretty inconsistent, which ends up equating to bad.
AppleScript is one of the most critical technologies on the Mac platform, and yet unlike some other high profile technologies, like WebObjects, Apple seems to be quite divided on AppleScript. I tend to look at this as "Good Apple" and "Bad Apple".
Good Apple trots Sal Soghoian, the AppleScript Product Manager, out on stage, does AppleScript demos, brings back features like folder actions, and allows them to work on closed folders, making them far more useful than they ever were in Mac OS 9.
Bad Apple keeps AppleScript out of some of Apple's biggest applications, and doesn't give the core AppleScript team nearly enough support in making sure that AppleScript is omnipresent throughout Apple's product line, and indeed has certain product team members pooh-pooh the idea, as when I talked to the Final Cut Pro folks about this at Macworld Expo and was told, "You can't script the creative process." This is a really odd thing to say about a product that lists, as a feature:
"Create custom FXScript plug-in filters and transitions using the FXBuilder scripting language."
in the Compositing and Effects section of the Final Cut Pro Tech Specs page. So you can script plugin creation, which, when you realize that many plugins are created to simplify certain actions, is, in essence, scripting the creative process. Contrast this with Media 100, who not only views automation as a valuable technique, but views AppleScript as an excellent way to achieve it.
DVD Studio Pro is even worse, as it specifically mentions scripting as a feature...but not AppleScript. I asked the DVD Studio Pro folks about AppleScript, and got essentially the same brush off as I received from the Final Cut Pro people. Well, they are partially right, but partially wrong, and the way they are wrong is literally costing Apple money and customers.
Creativity can't be scripted, but the real world says that you will always have far more production people than creative staff. For every artiste pushing the envelope with Final Cut Pro, you are going to have ten people creating the same basic ad spot over and over again, with the only change being the film and sound clips, and a bit of the titling. That's the key word. Production. Production makes you a lot more money than creativity, and Production is very scriptable. Just ask Showtime, who use Media 100's AppleScript features to automate their production work.
It's not like this trail hasn't been blazed already. Cal Simone and Main Event Software blazed it with PhotoScripter, a Photoshop plugin that was not much more than a big, fat AppleScript dictionary for Photoshop, and showed that there was all kinds of room for automation in the creative world. If you think about it, it's eminently logical. If your creative people aren't wasting their talents on repetitive monkey work, then they can be doing something oh...creative maybe?
Adobe obviously agrees, as every new version of their product line of late has a very thorough AppleScript implementation. (Okay, so Acrobat's scripting is really quite wretched, but that's such a bad OS X port that I'm ignoring it until the next new version comes out.) If Adobe and Quark (perhaps the only time Quark has had its head on straight is with AppleScript) can make their products scriptable, then Apple's 'creativity' excuse holds less water than a mesh bag in the desert.
An even greater refutation of this line is iDVD...which is now...scriptable. Hmm...so now a free application is a better production tool than a thousand-dollar application. Just because some people still don't get it, here's a way you could make money with a scriptable Final Cut Pro and DVD Studio Pro: videotape conversion.
Face it, VHS tape is not set up for long-term storage and being played at every possible chance. It deteriorates, it loses fidelity, it just sits there and dies. In comparison, DVDs are tanks with regard to shelf life and deterioration resistance. So, if you could script Final Cut Pro, you could get a tape in from a customer, and view just enough of it to set some basic conversion parameters in Final Cut Pro. From there, you hit a script that says, "Process this tape with current parameters into a DVD Studio Pro file with the following settings." Once that's done, DVD Studio Pro, activated by the folder action that Final Cut Pro initiated, takes that file and creates a DVD from it, or dumps it to DLT for larger jobs. The point is, you can go from VHS to DVD with very little human intervention. You can create a relatively large set of DVD templates, and use the one the customer wants just by using a different start script. You can do all of this on a lone machine in the corner that doesn't have a human sitting at it watching nothing happen. Since AppleScript can trap errors, you can even set up notification routines for errors, or the lack thereof.
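The hand-off described above is really just a watch-folder pipeline: one application drops finished files into a folder, and a script sweeps the folder and pushes each file into the next stage. A minimal sketch in Python, where the folder names and the "template" step are hypothetical stand-ins for the Final Cut Pro and DVD Studio Pro stages:

```python
import os
import shutil

# Toy watch-folder stage: files dropped into the watch folder are picked
# up and handed to the next stage (here, just moved to an output folder
# with a template name prepended). In the real pipeline this hand-off
# would be a folder action kicking off the burn step.

def process_watch_folder(watch_dir, out_dir, template="standard"):
    """Run one pass over the watch folder; return the files handled."""
    handled = []
    for name in sorted(os.listdir(watch_dir)):
        src = os.path.join(watch_dir, name)
        if not os.path.isfile(src):
            continue  # skip subfolders, in-progress work, etc.
        dst = os.path.join(out_dir, f"{template}-{name}")
        shutil.move(src, dst)
        handled.append(dst)
    return handled
```

Run it on a schedule, or trigger it from a folder action, and the machine in the corner does the production work while nobody watches.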
See, there's a business that you can't do on the cheap, because you can't automate production in Apple's high-end applications. They don't get it. But if iMovie gets scriptable, and I can script burns in iDVD (one of the very few things you can't script in iDVD), I'm starting a business...
This is not the only place where Apple's split personality hurts them with regard to AppleScript. Applications like Mail, Address Book, and iCal are AppleScriptable, but the implementations are really mystifying.
For example...in the Address Book dictionary, I can set the middle name of a contact, but the interface doesn't have a middle name field. As far as Address Book is concerned, the only difference between a work address and a home address is a text label. The same goes for email addresses. Yet there are distinct items for each kind of instant messaging vendor. That's right, there's more emphasis on your IM entries than on where you live and how to talk to you or email you. IM's cool, but no one is IM'ing me a check. The Address Book UI allows me to import and export data (but not as tabbed text, which is still the only reliable way to do this), but the AppleScript implementation doesn't.
iCal has no way to make an event all-day other than monkeying with the start and end times. The repeat/recurrence of an event is "The iCal string describing the event recurrence, if defined". That's clear. Microsoft Entourage has the same type of entry for recurrence, "the iCal recurrence rule", but adds a simple boolean true/false test for "Is the event recurring at all?" The same goes for all-day events. Now, both of these are read-only via AppleScript, which is silly, but it beats iCal in at least being able to quickly check for repeat/no repeat, and all day/not all day.
iChat is not scriptable at all, yet AOL Instant Messenger has scripting terms for almost every parameter that you would use in an IM session, including file transfer and buddy interaction. Hmm...let's see: you need to transfer a file to someone, but it's too big to email, and they need it faster than you can send it via snail mail. They have a fast connection, but they're in Hong Kong and you're in New York. Seems to me that you could use Entourage's ability to run a script from a rule, plus the AOL AIM client, so that when the person in Hong Kong is up, they send you an email with some specific subject line, which causes Entourage to kick off a script that starts up AIM, verifies the Hong Kong person is online with their AIM client, and then transfers the file via AIM. If they aren't on AIM, tell Entourage to send them an email telling them to fire up AIM on their end, and then resend the original email. Here's the kicker...that idea took me about three minutes to cook up. This is not rocket science. It's just thinking different.
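The rule-driven hand-off described above boils down to a tiny bit of dispatch logic. Here's a toy sketch of that logic in Python; everything here is hypothetical (the real thing would be an Entourage rule running an AppleScript against AIM), it just shows how little "intelligence" the automation needs:

```python
# Toy dispatch for the email-triggered transfer: the incoming subject
# decides whether to start a transfer or ask the other side to get
# online first. buddy_online, transfer, and reply are stand-ins for
# the AIM and Entourage steps.

def handle_message(subject, buddy_online, transfer, reply):
    """Dispatch one incoming message; return the action taken."""
    if subject.strip().lower() != "send the file":
        return "ignored"
    if buddy_online():
        transfer()
        return "transferred"
    reply("Fire up AIM on your end and resend your request.")
    return "asked-to-connect"
```

Three branches, one magic subject line, and the whole round-the-world transfer runs itself.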
ICQ doesn't do file transfer, but has a nice dictionary for what it does do. Yahoo Messenger isn't scriptable, but it wouldn't surprise me if they figured things out before Apple does.
Mail doesn't allow for AppleScripts to run from rules, and isn't useful in an automation process because someone decided that a way to 'secure' Mail from malicious scripts was to require you to manually answer a dialog before running a script. Well, that's not security at all, and all it does is cripple Mail's usefulness here.
It's not just the iApps, or artiste applications either. None of Apple's server management tools are scriptable. We're on the third major release of Mac OS X, and the second with decent AppleScript, yet I have to manually create printers. Network settings are only settable manually or via the command line. Yet NetOctopus and Timbuktu Pro, both from Netopia, are scriptable. InterMapper can at least send emails of notifications via AppleScript. Retrospect is highly scriptable, as are almost all the other third-party network management applications. Even the next version of DiskWarrior will have AppleScript capability.
So let's see...if I want the leading email AppleScript implementation, I go to...Microsoft. For creative professional applications, I go to...Adobe and Media 100. For contact management scripting, I go to...Microsoft or Power On Software. For IM client scripting, I go to...AOL. For systems management, I go to...Netopia/Alsoft/Dantz/Dartware. At exactly what point other than the OS does Apple take the lead here? Oh wait...iDVD.
AppleScript isn't new. The idea isn't new. I was willing to forgive first releases of applications that Apple bought, but neither Final Cut Pro nor DVD Studio Pro is a 1.0 application. There is absolutely no excuse for Address Book not to be the example of contact management scripting. The same for Mail, iChat, etc.
Apple cannot stand on stage and say "AppleScript is a critical OS technology," and then have mediocre, or non-existent, AppleScript implementations in their own applications. This tells developers, "Oh, don't really worry about AppleScript, heaven knows we don't really care about it." Yes, I know that each application team is responsible for that product's AppleScript implementation. That's still no excuse. Everything that Apple ships needs to be fully scriptable, and the shining example of scripting for that type of product.
If Apple can't be expected to eat the AppleScript dog food, why should anyone else? Considering the number of times that AppleScript has kept Apple customers as Apple customers, that's a really silly attitude to have on Apple's part.
Macworld New York 2002 Not the Keynote
created 1 August 2002
So, outside of the keynote, you still had some really interesting stuff...just not interesting to the general user community.
The one that really jumps out at me is Matlab running natively on Mac OS X. For those who don't know, Matlab is considered the engineering modeling application. Companies like Bose live and die with Matlab, and if a platform doesn't run Matlab, then it gets the boot. For a lot of companies, the former lack of continuing Matlab development doomed the Mac to, at best, a niche platform. But I saw Matlab, with the Simulink package, running natively at the show, and the folks there anticipate a release in a month or so. It's not using an Aqua interface, but rather an X11-based interface. This is not really a big deal, as the people who use Matlab use X11 a lot anyway. This also allows you to set up an Xserve as a Matlab server for other Unix clients...great stuff! The Matlab folks told me that anyone with a multi-platform license will automatically be able to use the OS X version under that license. This is not terribly sexy, but it is terribly important for the platform, especially in SciTech.
Along the lines of important but not sexy was the Oracle stuff I saw at an off-site event. It was Oracle 9i running natively in Mac OS X. The Oracle rep told me that they plan to ship the developer version when Jaguar ships, and that the early 2003 timeframe for the full 9i server and clustering technology was still a good estimate. (This timeframe was announced, I believe, by Apple/Oracle in May of this year at the Xserve rollout.) Again, the common user isn't going to get all wiggly over Oracle 9i clustering, but to the IT geeks, this is major good news. Oracle is still the top (or a close number two behind DB2) enterprise database system. Having the full 9i, and the clustering technology, running on Mac OS X is a major coup for Apple, and will confer the kind of legitimacy in the enterprise market that you can't get from anyone else, except maybe IBM.
Oracle was not the only enterprise database vendor at the show, however. The Sybase folks were announcing that they would have their ASE server ready in a month or so. While not as popular as Oracle or DB2, this announcement basically means that the Mac is soon going to be able to host two of the four top enterprise database environments (Oracle and Sybase, leaving only DB2 and Informix not on the Mac. I don't consider a single-platform database to be truly enterprise quality, which is one of many reasons I don't rate SQL Server as a participant at this level. Maybe one day, when Microsoft gets out of their insecure six-year-old mindset, they'll allow their products to stand outside of the Windows wall.), which makes the Mac a far more legitimate choice at more levels of the enterprise than ever before. This is critical in expanding mind and market share.
Yet another pickup truck application announced at Expo was version 5 of the 4-Sight Fax client from the folks at Soft Solutions. This is just the client for their fax server product, although they plan to have the server ready this fall. Even though the server hasn't been released yet, I'm still happy about this product. It is all TCP/IP based, can email your faxes to you as PDF files, and can email PDFs to the server to be faxed out. It supports Mac OS X/9 and Windows via native clients as well. The client was being written in Java, so in theory it should support many more platforms as well, but how well remains to be seen. This is another one of those dull, boring applications that shows the enterprise market that you can use a Mac to get real, dull, boring work done, which is the kind of thing that floats the enterprise boat better than anything you'll ever do in Maya. Not a 'sexy' announcement, but important nonetheless.
One area that should have made me far happier than it did was AppleScript. Not AppleScript itself, which is still one of the major reasons I love the platform, but rather Apple's odd issues with it. Final Cut Pro and DVD Studio Pro are still not scriptable. When I ask, I get variations on, "You can't script the creative process." Well, no, but I sure can script the production process, and guess what, the production process always generates more work than the creative process. For every zoomie-cool Final Cut effect, there are hundreds, perhaps thousands of hours of repetitive production effects that vary only slightly, if at all. Station identification effects, upcoming feature announcements, none of these are that creative once the initial build is over. With Media 100, which is scriptable, you can turn out a lot of work with very little human intervention. Just ask Showtime how nice this is. It's nice enough that they use Media 100 instead of Final Cut Pro.
What's even sadder is how a good dictionary can make an application nearly infinitely more useful than it otherwise would be. If iMovie were scriptable, I could set up a business that ran old VHS and Beta home movies through a DV cam into iMovie, which could then output to iDVD (which is scriptable), which would then burn them to DVD along some predetermined lines. You can make real money off of this, and right now, because of AppleScript, iDVD is a far better production tool than DVD Studio Pro. Why? Automation. Some of the things I saw that you can now do with iDVD just blew me away, and for a free application to be able to make the 'pro' version look amateurish is just...well...dumb.
I love AppleScript, and I'm totally jazzed about the things that I saw Jaguar able to do at the show, but come on, Apple. There is no excuse for Final Cut Pro to not be scriptable in version 3. DVD Studio Pro is in its second release; where's my dictionary? Where's the dictionary for the GUI admin tools in Jaguar Server? Here's how to determine the minimal capabilities you should make scriptable:
all of them.
Because if you do, you'll see your applications selling to people you never thought would care and being used for things you never thought possible. If that isn't "thinking different" I don't know what is. AppleScript is a core, critical technology, and Apple needs to reflect that in all its applications.
Jaguar Server, leaving aside the AppleScript issues, is poised to be the server that Apple has needed for a long time. Finally, you will be able to easily NetBoot and manage Mac OS X machines. It's also adding full LDAP v3 server and client support, which means that you will be able to integrate Mac OS X into any LDAP or LDAP-compatible environment, like Active Directory, iPlanet/Sun ONE Directory Server, Novell's eDirectory, or OpenLDAP. This is a critical issue for Jaguar Server, and OS X in general. While Mac OS X has had some basic LDAP functionality, it wasn't LDAP v3 compliant, and it had a lot of issues that made it quite painful to set up. Perusing the recently released Administrator's guide (available on the Mac OS X Server site), it's obvious just how serious Apple is about LDAP.
They are trying to make the LDAP integration as easy and as thorough as possible, and are attempting to make it as simple as possible to use existing custom schema via a one-button 'From Server' option. LDAP v3 over SSL is now available as well, so that critical data isn't being sent as clear text. If you have your existing LDAP server using DHCP to advertise itself, Jaguar Server is able to handle that as well. If you have to modify your LDAP server mappings, you can do that locally on Jaguar Server, and then write them back up to the LDAP server. If the implementation works as well in the server room as it looks to on PDF, then Apple deserves major congratulations for this level of upgrade. Note that while NetInfo is still a major part of Mac OS X, the simple fact is, it never caught on with anyone outside of the NeXT/OpenSTEP world. LDAP is the open standard that the world is using for directory services, and Apple is to be applauded for recognizing this, and acting appropriately, rather than retreating into a NIH funk.
Jaguar Server is also finally going to allow people with Mac OS X clients to manage them at the same level as Macintosh Manager allowed them to manage Mac OS 8/9 clients. This is a critical function, and its absence has been a major reason for the slowness of Mac OS X's adoption by the K-12 market. Macintosh Manager, for all its growing pains, has allowed people who aren't IT administrators to run fairly large networks of Macs in a coherent way. When you are talking about kids, who are smart, and live to test boundaries, this is not an easy job to do. Up until now, there was no easy, or even remotely 'non admin-friendly' way to run Mac OS X clients. Well, the new Workgroup Manager in Jaguar Server should fix that. It works with the newer features in Jaguar, such as the return of the Simple Finder, and the ability to restrict access to settings and applications for non-admin accounts, even without Jaguar Server. (As a parent, I definitely appreciate this ability. It allows me to decide computer use for my child, rather than expecting Apple, or the government, to do this, which is how it should be.) In addition, it can control iDisk access and Classic features, and adds disk quota limits for home directories, a much needed feature, along with print quotas, which should make Mac network administrators much happier about implementing Mac OS X.
Another feature long missing from Mac OS X Server 10.X has been the ability to share printers via AppleTalk, which always seemed like a fairly silly omission for Apple. That's back, which is a huge relief for people with older machines that couldn't support SMB or LPR printing. The improved print queue logs will make it easier for people who need to implement chargeback mechanisms for printer usage on their networks.
Netbooting of Mac OS X images is now supported, another welcome feature that has been holding K-12 schools back from Mac OS X. As well, the ability to install Mac OS X software across a network is now included in Jaguar Server, a feature long missed by Mac OS X administrators.
From what I can see, Jaguar Server is the upgrade that Mac administrators have been waiting for. In almost every way, it is a step up from AppleShare IP and older versions of Mac OS X Server. The only thing left out is an easy way to set up DNS. That is still a command line exercise, and considering how critical DNS is to Mac OS X Server (which practically lives on DNS and reverse DNS lookups), this is a rather large lapse on Apple's part. There are other tools out there to help you configure DNS, like the GUI interfaces available from VersionTracker, and QuickDNS from Men & Mice, but still, for such a core service, Apple needs to provide better tools than Emacs and Pico.
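To give a sense of what Apple is leaving to the command line, here is roughly what a minimal BIND forward zone file looks like; every name and address below is a placeholder, not anything from a real setup:

```
; example.com zone -- all names and addresses are placeholders
$TTL 86400
@    IN  SOA  ns1.example.com. admin.example.com. (
              2003040301  ; serial
              3600        ; refresh
              900         ; retry
              604800      ; expire
              86400 )     ; negative-cache TTL
     IN  NS   ns1.example.com.
ns1  IN  A    192.0.2.1
www  IN  A    192.0.2.10
```

Hand-maintaining serial numbers and trailing dots is exactly the sort of fiddly work a GUI tool should be doing for you.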
Besides products, the show floor was interesting. There were a lot more smaller vendors in the spaces usually occupied by Adobe, Quark, and Macromedia, which is better than some may think it is. The Mac market needs new blood to remain vital, and we can't achieve vitality if we define our expos by a handful of major vendors. I've also noticed, since the release of Mac OS X over a year ago, that the technical side of Expo seems to be flourishing. From the greater professional content in the sessions, to the new hands-on labs, to more technical companies coming to the Mac, Macworld Expo is becoming far more geek-friendly than it had been prior to Mac OS X. Not surprisingly, I think this is a good thing. The Mac market needs more geeks, even if they do sometimes have a bad opinion of home users. More geeks = more toys for everyone, and that's always good.
So, it was an odd expo, but not a bad one by any stretch. The attendance was surprisingly good for the state of the economy, which shows that the Mac market, while small, is not nearly as moribund as the Wintel market looks to be. If Quark, and a few other companies, can keep to schedule, we may have a really cool expo in January as well. Also, look to Seybold and the Paris Expo in September for more interesting things in the Mac market.
Software Update Exploit
created 8 July 2002
Software Update Exploit for Mac OS X...our very first, Mac OS X - specific security hole!
Kind of makes you feel warm and fuzzy inside, doesn't it? No? Good, it should be making you dive for your Software Update preferences and turn off automatic updates. Go do that now, I'll wait...
Finished? Excellent. So, let us take a look at what the problem is, how the exploit works, and possible ways it could be avoided, both now, and in the future.
Software Update is designed to be a convenience. Simply run Software Update on a schedule, or manually, and any updates that apply to your particular system are downloaded automatically or at your request, and installed. This is not an inherent problem. The problem occurs because of the security in this process. More precisely, the lack of security. It seems that the software update process is an unencrypted HTTP stream between your machine and port 80 on Apple's software update servers. That means that anyone can use a packet sniffer like EtherPeek (a commercial tool) or Sniffles (a freeware tool, though the download site appears to be down) and track exactly what is going on during a Software Update session.
But you don't have to do this on your own (although I did with Sniffles; it took about five minutes), as the person who discovered the exploit has done a quite thorough job of documenting it, and showing you exactly how it could be exploited. Russell Harding discovered and documented the problem on his site at http://www.cunap.com/~hardingr/projects/osx/exploit.html, and I encourage everyone reading this article to go to the site and read Russell's work; it's a well-done bit of hacking.
For anyone who still has their system set to auto-update, you should have changed that setting the first time you started Mac OS X. That is the first thing I disable on any system I run. In fact, on any platform, the first thing I kill is auto update. It's a complete minefield from a security point of view, and I've always been a little disappointed that Apple sets it that way by default. I hope that will change soon.
Oh, and for all you Mac OS 9 users who are about to feel all smug and "I told you so" about this, you have nothing to be smug about. I ran Software Update in 9, and got the same unencrypted, unauthenticated data stream as in Mac OS X. It's even worse for Mac OS 9: since it doesn't have any concept of authorization, anyone with physical access who can run Software Update can get you just as hosed as on Mac OS X. All it takes is a little know-how and some AppleScript, and your system is a brick.
So now we know what the problem is, but how can you exploit it for good or evil? I mean, only Apple can be the Software Update server, right? Wrong. Since there is no authentication for accessing the Software Update server, you can easily spoof the DNS name of the server, and now any queries going to the server go to your box. If you want to hijack Software Update requests on a switched network, you would need to do some Address Resolution Protocol (ARP) spoofing. Sean Whalen has a good introduction to this in his PDF, "An Introduction to ARP Spoofing". While it isn't a subject that will make you warm all over, it is something that you should be aware of, as forewarned is forearmed.
With these techniques running, all you do is fake up a nice little exploit and give it the correct name, which is easily done off of VersionTracker, MacInTouch, MacCentral, MacMinute, etc., as they all give you the current names of software updates. Make the Read Me look good, perhaps fake a nice license agreement, and the clueless user is now giving you their system.
For Mac OS X, Russell has an example of the kind of back door that can be installed: a cracked version of sshd, the Secure Shell daemon that allows remote command line access to your Mac. In his example, anyone who can locate your Mac on the Internet can root your machine with the not-so-secret password "URhacked!". However, you could also install a cron job that erased your boot drive and any secondary drives on your system. Or a job could be installed that waited until you had a fast Internet connection active, and then slowly FTPed your Documents folder, preferences, email, etc., to someone else. On Mac OS 9, it could be a small faceless background application, à la the Control Strip extension, that would do the same thing on a given date, or even a random date. It could also find out which antivirus software you were running, and fake the preferences so it could disable it without you being warned, although considering the ignorance that Mac OS 9 users display towards security and viral issues, this wouldn't really be necessary.
However, this can be used for good as well as for evil. If you run a corporate network, you can use this data with a proxy server to redirect all Software Update calls to your own internal server(s), and take control of this process for your network. So there's at least one good thing about this exploit, although I'd be far more sanguine if it was because it's designed to work that way, not because it's so easily hackable.
So now the question is, what to do about it. Well, first of all, turn off auto update and auto checking in Software Update. There are plenty of ways to find out when Apple releases an update, including Apple's support site, so you don't need to be pinging their servers once a week and hoping you really get their servers.
But that's a short-term fix, and not really a solution at all, because if you get hijacked by someone clever, you could still get cracked if the hijacked update looked good. Unless you are watching the raw IP addresses that your machine is connecting to, and know what they should be, you're still vulnerable; only now you have to check manually.
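One thing proper authentication schemes formalize is integrity checking: if the vendor published a digest of each update somewhere you trust, you could verify a download before installing it. A minimal sketch of that idea in Python; the out-of-band trusted channel for the digest is the hypothetical part, since nothing in the current Software Update process provides one:

```python
import hashlib

# Verify a downloaded update against a digest published out-of-band.
# The expected digest would come from some trusted channel (a signed
# web page, a phone call, whatever); that channel is the assumption.

def sha1_of_file(path, chunk_size=65536):
    """Hash a file in chunks so large updates don't need to fit in RAM."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

def verify_update(path, expected_digest):
    """True only if the download matches the published digest."""
    return sha1_of_file(path) == expected_digest
```

Note that this only moves the trust problem to the channel that delivers the digest, which is exactly why the real fix has to be server authentication, not user diligence.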
So there's only one real way to fix this, and that is to use proper authentication and encryption with Software Update. Now, the quick and dirty way would be to just require a user name and password when you connect to the Software Update server. There are a few problems with this. First of all, the identity of the client (you) isn't the problem. It's the identity of the server. So proving you're a Mac user to the server does no good. All you need is a cracked server that allows anyone to connect, essentially ignoring the password, and you're just as hosed. What needs to be done is to have the server prove its legitimacy to you.
Luckily, there are a couple of ways to do this. The first, and probably the best, is already built into Mac OS X, and is even set up to be an authentication scheme for Mac OS X. This scheme is called Kerberos, and was developed at MIT. It's designed to provide security in an insecure world, not just against people outside your network, but against people inside your network as well. Basically, Kerberos is a way to have a person or machine prove their identity to another person or machine, without exchanging passwords over the network, and on a time-limited basis. (I'm drastically oversimplifying this, but there is an excellent tutorial on it available.) Since Kerberos is a part of Mac OS X, the basics are already there. Apple would have to do some work to get the server to authenticate to the client, but Kerberos support makes this easier. There is also a version of Kerberos available for Mac OS 8 and 9, albeit not as full-featured as the Mac OS X version.
Another option would be to use SSL certificates. Since, like Kerberos, SSL support is built into Mac OS X, Apple could set itself up as a Certificate Authority (CA), and when you get a mac.com account, you would install that CA into Mac OS X. That way, the only way for a server to send anything through Software Update would be to use the right certificate, and SSL encryption for the data stream. (Again, I'm really oversimplifying things, but there's not a lot of space for an article on SSL here. For more information on SSL, check out the OpenSSL web site.) Unfortunately, SSL support for this on Mac OS 9 would be a lot more work than on Mac OS X, so I'm not sure how good an option this is on that platform.
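For the curious, here's a rough sketch of what the pin-the-vendor-CA idea looks like in code, using Python's ssl module purely as illustration. The CA file path is hypothetical, and whatever Apple actually builds would live inside Software Update itself, not in a script like this:

```python
import ssl

# Sketch of the SSL option: require the update server to present a
# certificate, and optionally trust only the vendor's own CA for this
# connection rather than the system-wide trust list. The ca_file is
# a hypothetical stand-in for an Apple-issued CA certificate.

def make_update_context(ca_file=None):
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    context.check_hostname = True              # server name must match its cert
    context.verify_mode = ssl.CERT_REQUIRED    # server must present a valid cert
    if ca_file:
        # Pin the vendor CA: only certs it signed will be accepted.
        context.load_verify_locations(cafile=ca_file)
    return context
```

A connection opened with this context fails outright unless the server proves it holds a certificate signed by the pinned CA, which is precisely the "server proves its legitimacy to you" property Software Update is missing.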
So there are ways to fix this, and they are ways that will keep this secure both now and into the future. The big problem is why this wasn't done earlier. The fact is, Mac users were extraordinarily lucky that this exploit hasn't been used yet, and for once, not being the popular kid is a good thing. But security by unpopularity isn't really security. In a sense, Mac OS X is no longer the innocent OS in a cruel world. This is a good thing; the sooner reality sets in, the better you can handle it.
This is Apple's first real test of how they react to something like this. How it reacts will determine the tone that Mac OS X will take for security issues for a long time to come. I'm hoping, and betting, like most Mac users, that Apple will react the correct way, and quickly release a patch that fixes this problem, and gets Software Update working the right way, all in one fell swoop. The only other option is for them to react like Microsoft does to Windows exploits, but I'm thinking/hoping they won't. They're far too smart for that...
(Note: This is a very quick look at the problem. As more details come out, I'll get an update out to you.)
No More Spam
created 25 June 2002
On dealing with Spam...
If there is another issue that makes people so angry that they throw away common sense as easily as they do over spam, I don't know what it is. I can almost understand it, as the constant barrage of idiocy from the people out there who think that I have a really horrid self-image, am impotent, without friends, and unable to meet girls, (okay, I'm a geek. Most of my friends live in other parts of the country, and face it, a lot of women don't like being second to a glowing box. To all the women of the world: If you want to be numero uno in a man's world, don't date a computer geek. You may tie for number one for a brief while, but eventually you are firmly ensconced in the number two slot. That's just the way geeks are.), can't make pancakes or salads without mechanical aid, etc.
So there is the hue and cry of "There ought to be a law...". Well, no, there shouldn't be. I mean really, how do you enforce anti-spam laws on the Internet when the spam server is in a different country? The answer is, you don't, so these idiotic spam laws are just that: idiotic, a waste of time and money, though they look good on the campaign trail.
Another highly touted option is RBLs, or black-listing spammers. They don't work as well as we'd like them to. Spammers are annoying, but they aren't stupid. They just switch mail servers, mail domains, etc. As well, most of the RBLs aren't the most professional bunch of folks I've dealt with. I don't want idealists filtering incoming mail. Idealists get angry, idealists justify vendettas, idealists are usually incapable of admitting error. I want to do business with a bunch of cold-hearted pragmatists who understand things like due diligence, SLAs, allowing clients to see test results, proper notification schemes, etc. All those things that aren't cool, and get in the way of SAVING THE WORLD FROM THE EVILS OF SPAM. You aren't saving the world, you're providing a service. Therefore, you aren't Spider-Man, you're Business-Man...act like it.
So RBLs, while a partial answer (they do work well when run well, but they aren't the panacea people want them to be), don't eradicate spam. What about anti-spam applications? Well, again, they work better than nothing, but remember, spammers aren't stupid. They can get a copy of any anti-spam application just like you can, and beat on it until they find the weaknesses and holes, and blow right through them. The same goes for anti-spam functionality in email applications. Even fun ones, like the 'bounce' feature in Apple's Mail application, end up being rather manual and time consuming.
Another option that I've seen recently is to change email addresses regularly. This one is probably the worst option of all, particularly for a business, or if you are on mailing lists. It's also temporary. The spammers will find you, they always find you.
But you can't just give up and spend hours a day wading through spam. So how do you filter out spam? It's really simple: you don't, or at least not directly. See, the problem is that spam isn't consistent. The headers change, the subjects change, it all changes. Email rules, filters, and anti-spam applications don't do well with fuzzy logic. So eventually the spam gets through.
So here's the trick. Don't filter the spam. Filter everything else. The stuff you want to get, or are supposed to get, is rather static after all. You know what work email is going to look like: create a work folder, and filter all work email into that. Mailing lists the same way; friends, family, etc. I've been doing this forever, or at least since Emailer 2. I got the idea after re-reading one of the entries in a favorite book series of mine, The Destroyer. In one of the books, (I leave it to you to figure out which number), a scientist has finally figured out an economical way to extract shale oil. Except he doesn't. What he does is find uses for all the non-oil products of shale. So the oil, which is the valuable part, is now a by-product, and therefore costs nothing to get. I realized that this would work for spam as well.
I don't filter my spam; I filter everything else. Therefore, 99% of what is left in my inbox can be assumed to be spam. It takes about thirty minutes a day, tops, to deal with the spam, and that is mostly scanning From headers and subject lines. So, I can just do a big select-and-delete on almost everything in any of my inboxes. Spam is gone, very little time wasted, and I find that by using filters on my incoming mail, I can be more efficient in dealing with the non-spam email as well.
This is more of an aikido approach than a blunt-force method. You aren't stopping the spam with a head-on attack. You are diverting the mail you do want, and then gently guiding the spam into your trash. While it doesn't stop spam at its source, or at the server, it does remove the effort of constantly tuning anti-spam applications, etc. This also works on any email client that has even rudimentary filtering, which is most of them. If your ISP is using an IMAP server, such as CommuniGate Pro, then you can easily create server-side filters that sort your wanted email as it comes into the server, so when you go to check your email, the filtering is done before the email ever gets to your Mac. Far simpler than creating multi-layered conglomerations of RBL subscriptions, anti-spam applications, junk mail filters, etc.
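The whole trick can be sketched in a few lines. This is a toy model, not any real mail client's API; messages are plain dicts here, and the folder names, addresses, and list names are all made up:

```python
# Sketch of "filter the mail you want, not the spam".
# A real client (or server-side rule set) does the same matching
# against real From and List-Id headers.

WANTED = {
    "work": ["@mycompany.example.com"],
    "lists": ["x-users.example.org"],
    "friends": ["bob@example.net", "alice@example.net"],
}

def route(message):
    """Return the folder a message belongs in. Anything that matches
    nothing lands in the inbox, which is now almost entirely spam."""
    sender = message.get("From", "").lower()
    list_id = message.get("List-Id", "").lower()
    for folder, patterns in WANTED.items():
        for p in patterns:
            if p in sender or p in list_id:
                return folder
    return "inbox"

msgs = [
    {"From": "boss@mycompany.example.com"},
    {"From": "listserv@host.example", "List-Id": "x-users.example.org"},
    {"From": "v1agra-deals@spam.example"},
]
print([route(m) for m in msgs])
```

The known senders and lists get sorted into folders; the unmatched leftovers in the inbox are the spam, ready for one big select-and-delete.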
It doesn't do anything to stop the spammers, but it makes dealing with their effluent far easier, and that's a worthy goal as well.
created 17 June 2002
I'm beginning to think that the crappy part about life is the missed opportunities that you don't get back. (Yes, this has a Mac-related point.)
I had numerous chances to see one of the most amazing blues musicians that ever lived. He was going to play once in Fargo, and I didn't have the easy cash for the ticket. "No prob, I'll see him next time". Two weeks later Stevie Ray Vaughan was dead. (He's dead, and Wayne Newton will live forever. Larry Niven was right, There Ain't No Justice...TANJ).
I had a number of chances to meet Don Crabb...he was going to show up at a meeting I organized one year at MacWorld...he never made it, and died before I could meet him...TANJ.
This weekend I find out that Rodney O. Lain is dead. I'm not going to pretend I knew anything about him prior to this other than he wrote for MacObserver. I knew of him, but TMO, as it's also called, just never had any real draw for me. But hearing the news, I went and looked at his work there.
I'm an idiot. An absolute, cross-eyed, drooling fool.
And I'm going to regret not knowing this man a little better for a long time. He was, (from his words), a smartass; eloquent, intelligent, but a smartass. And not afraid to poke sticks at the MacMacs. (MacMac, n. The name created by John C. Welch to describe the mindless drones walking around Macworld Expo sounding vaguely like a cross between a penguin and a duck..."MACmacmacMAcmacmacmac..." also, the name for those people who would buy a gold-plated sock if The Steve wore it.) He had opinions that pissed people off, and didn't really care. He seemed to welcome it, (you don't write a column with that word that white people must never ever use that refers to black people in a really bad way, but black people use it all the time to refer to themselves in ways that range from convivial to derogatory, unless you are planning to make people angry.)
Damnit, I never knew the guy, but I like him a lot now.
I'm not saying that we would have been friends, he may have hated the air I displaced as I ate food that could have gone to someone worthwhile. That's the part that sucks. Here was a guy who, for all the differences in our backgrounds, seemed, at least through his writing, to be someone who had many of the same thoughts and opinions I do. That's rare. (Okay, it's a bit scary too I suppose.) I'm not a guy who says things to please people. I piss people off. I told Phil Schiller once that he was being ignorant. Turns out he wasn't. MMMmm...that was a nice piece of crow I ate when I apologized to him.
But now I'll never know. Nor will a lot of people. See, I don't regret things I've done. I feel bad for some of them, but in the end, my mistakes shaped me as much, if not more than my successes. As a wise fool once said:
"Any man can grin, when his ship's come in, and he knows he's got the stock market beat...but the man who's worthwhile, is the man who can smile, when his shorts are too tight in the seat."
Okay, it's trite, but so are most of the sayings that point out the bleedin' obvious. What I do regret are the chances I bypassed, via stupidity or ignorance. I'll never know if meeting and talking with Rodney would have been good, bad, or indifferent. That's a damn shame, as there are not a lot of people that I would look forward to meeting.
But now there's one less person I'll be able to meet.
As it turns out I did meet Rodney. In San Francisco, during Expo 2000, at the Mac Authors dinner. It was the same event I met Shawn King, Bob Cringely, and Pammy at as well. I was sick with a mild flu, and had a really evil headache, so I don't remember him that well, but what I do remember was a guy that other people who knew him better say was pretty accurate. I remember laughing a lot, the dude was funny.
So at least I met him.
Xserve Part 3
created 6 June 2002
How does the Xserve stand up to the competition?
Well, from what I can see, pretty well. I ran some comparisons with IBM, Dell, HP, Compaq, and Sun. Any PC configurations that show options are to get them to match the Xserve as closely as possible. I didn't order any keyboard or monitors unless there was no choice. If possible, I went for a Linux OS, to avoid licensing costs, since the Xserve has unlimited licensing. All configurations were from the company's web sites, taken at whatever they are offering on 6 June 2002.
For Apple, I configured the following:
Xserve, from the standard Apple Store site:
- Single GHz G4 CPU
- 2 GB DDR SDRAM, 4x512MB DIMMs
- 2 120 GB UltraATA hard drives, 7200RPM
- Dual Gigabit Ethernet Interfaces
- Standard ATI video card
- Ultra 160 SCSI card for an external RAID
- AppleCare Premium Service and Support for Xserve (3 yrs)
- Standard shipping software (no extras that don't ship with the box)
My grand total was US$6199 for a mid-line system, that isn't the baddest server on the market, but will work for a variety of tasks. I would use this with the two internal drives mirrored for booting and swap, and then add a SCSI RAID array for home directories, file storage, etc. That way, only the OS is on the internal drives, everything else can be on a separate box with redundant power supplies, drives, fans, etc. So if I have multiple Xserves, I can more easily move the array to a different server if one goes down.
PowerEdge 1650, ordered from the Small Business area of the server site:
- Single 1.13GHz Pentium III, (Pentium 4 not offered for this box)
- No Rails
- 2GB ECC PC-133 SDRAM, 4x512MB DIMMs
- RedHat Linux 7.2 (avoids Windows licensing costs entirely)
- CD-ROM (matches the Xserve)
- PCI Riser with 2x64bit/66MHz slots (this matches the Xserve's slots)
- Active Bezel option
- 2 73GB Ultra 160 SCSI drives, (The closest to 120GB I could get)
- No on-board RAID controller (the Xserve doesn't have one)
- Dual Channel Ultra3 SCSI card (for external SCSI connections, only one offered)
- Dual Intel Copper Gigabit NICs
- 3Yrs Silver support
Grand total was US$6073 for a roughly similar system. The Dell Silver support looked to be the closest to Apple's support options. Allowing for a similar RAID setup saved me $499, and going with Red Hat instead of Windows 2000 Server with 5 client licenses saved approximately $480. Had I not gone with any OS, that would have shaved $159 from the price. I was unable to tell if the on-board SCSI controller had an external port, so I added the SCSI card. So the Dell is about US$126 cheaper.
xSeries 330, from the Intel server section
- Single 1.26 GHz Pentium III (No Pentium 4 option)
- 2GB ECC PC-133 SDRAM, 4x512MB DIMMs
- Redhat Linux 7.2
- No RAID controller
- 2 73GB Ultra 160 SCSI drives
- Ultra160 SCSI card
- Dual Intel Copper Gigabit NICs
- 3YR onsite repair, 24x7, 4 hour response
Grand total was US$7111 for the most comparable system. The 330 comes with dual 100Mbps Ethernet adapters, but I added Gigabit to make the configuration as close as possible. So you actually have four Ethernet interfaces. IBM doesn't offer a second power supply option for the 330, in fact, Dell was the only vendor out of those I looked at that did. The IBM only has two drive bays, so this config maxed out the on-board storage. IBM does have the excellent LightPath system to show you malfunctioning components, so that's a consideration in the price as well. All in all, a solid system, and it was US$912 more than the Xserve. Again, you end up getting some nice features here for that extra money.
lp 1000r, from HP's IA32 section
- Single 1.26GHZ Pentium III (No Pentium 4 option)
- 2GB ECC PC-133 SDRAM, 4x512MB DIMMs
- Redhat Linux 7.1
- No RAID Controller
- Single Gigabit NIC (only one PCI slot)
- 2 73GB Ultra 160 SCSI drives
- No SCSI card (slot taken by Gigabit NIC)
- 3 year onsite warranty 4hr response during business hours
Grand total was US$7332 for a system that is quite a bit less capable than the Apple/IBM/Dell offerings, yet was US$1133 more expensive than the Xserve. They also had the most annoying config site...I think the inactivity timer was about two minutes.
Proliant DL360 from their rack/blade section
- Single 1.4GHz Pentium III (No Pentium 4 Option)
- 2GB ECC PC-133 SDRAM, 2x1GB DIMMs (only option)
- SuSE Linux Enterprise 7, (only Linux option, not installed)
- Dual Embedded Gigabit NICs standard
- 2 73GB Ultra3 SCSI drives
- Embedded RAID controller standard
- Ultra3 SCSI card, single channel
- 3YR 24x7 4 hr response On-site service (only 24x7 option)
Grand total was US$8978 for a system that comes with a lot of good features standard, but you are paying for it, to the tune of US$2779 more than the Xserve. The OS options are clearly targeted at Windows or Netware users, with only one Linux option, and no options for installation.
Sun Fire V120 Server
- Single 650MHz UltraSPARCIIi Processor
- 2GB ECC PC-133 SDRAM, 4x512MB DIMMs
- Solaris 8
- Dual 100Mbps Ethernet ports (no Gigabit option)
- Internal UltraSCSI controller with external port
- 2 36GB UltraSCSI drives
- 1YR Platinum support (no 3YR option)
Grand total was US$7827 for a system that loses in terms of drive capacity, network bandwidth, and drive throughput, yet manages to be US$1628 more expensive than the Xserve, and is only cheaper than the Compaq. Sun has a great reputation for reliability and quality, but a US$1600 premium for some pretty sad configurations? Come on Scott, time for a trip into reality.
So, in the end, the Xserve is not the absolute cheapest, but it more than holds its own against the major server manufacturers. Of these, Dell is the only one that doesn't make its own OS, so the companies are all pretty similar. I left out the second and third tier manufacturers, because they don't have a structure similar to that of the companies I did look at.
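If you want to double-check the deltas quoted above, a few lines of arithmetic over the grand totals does it:

```python
# Quoted grand totals (US$) from the configurations above.
xserve = 6199
quotes = {
    "Dell": 6073,     # PowerEdge 1650
    "IBM": 7111,      # xSeries 330
    "HP": 7332,       # lp, IA32 section
    "Compaq": 8978,   # Proliant DL360
    "Sun": 7827,      # Sun Fire V120
}

for name, price in quotes.items():
    delta = price - xserve
    label = "cheaper" if delta < 0 else "more expensive"
    print(f"{name}: US${abs(delta)} {label} than the Xserve")
```

Running it reproduces the figures in the text: Dell comes in $126 under, everyone else lands $912 to $2779 over.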
Wrapping it all up
So, at the end of these many pages, the Xserve, while not perfect, is a heck of a first offering from Apple. It has room for improvement, as does everything, but Apple hit 99% of the critical requirements for a server all the way around. For most jobs and tasks, I would have no qualms about recommending an Xserve as a serious option in almost any environment.
Xserve Part 2
created 2 June 2002
Managing and supporting the Xserve
While the Xserve is a neat box, it's still just a box. Face it, for file sharing and print sharing, any box will do. QuickTime/Darwin Streaming Server runs on everything, and should be given away in boxes of Tide. NetInfo is only really useful if you have a Mac-centric network. So, for many basic server functions, simply being from Apple isn't enough. The Xserve has to work and play well with others, and more importantly, it has to be easily manageable, and have a good support system.
While Apple has had easily managed servers before, there were some issues. If you went with Apple's management tools, you were stuck in a Mac-only world. Apple did provide some web interfaces for things, but to really manage an Apple box with Apple software, you accepted the fact that these boxes were going to be isolated from the rest of your network. If you wanted to integrate your Macs into a heterogeneous network, then you had to use third-party tools from Netopia, Neon, Dartware, etc. Even with those tools, you were still isolated in the sense that you had to use them as a bridge to other tools such as Unicenter, OpenView, etc.
This was because, prior to the Xserve, Apple either ignored standards such as SNMP, the Simple Network Management Protocol, or shipped versions that, while useful, didn't integrate into the Mac OS well enough to be more than a checkbox filler. Even with Mac OS X, until the Xserve ships, you have to get SNMP from a third party, Dartware.
This has always been a killer for Apple in the server arena. Hardware issues aside, if you can't integrate a server into your infrastructure, then you are not going to be real enthusiastic about buying them at all, much less in large numbers.
So the management software and remote management capabilities of the Xserve are critical. (Again, this is all based on what little I can get out of the information Apple is handing out prior to the Xserve's release.)
Like a lot of Apple's announcements, sometimes the most important parts get the least amount of play, especially if they aren't particularly flashy, and what could be less flashy than the command line? But for remote administration of servers, it's a critical tool. For example, the AIX boxes I run would be far harder to deal with if I did not have access to SMIT, IBM's excellent administration tool for AIX. It lets me do almost any administrative task I need to perform, all over a low-bandwidth command-line connection. This has been one of Apple's weak areas for Mac OS X Server. While you could use the command line for creating users and groups via the 'niutil' utility, or killing processes via 'top', there were some holes, especially in the area of installing software.
The Xserve is including, for the first time, an easy way to not only run Apple's installer packages via the command line, but the Software Update mechanism as well. It is also going to let you set system and network preferences via the command line. This type of thing is of critical importance to managing the Xserve. With Mac OS X, installing files has always pretty much been a case of copy the new file to where it needs to be, maybe make some entries in a settings file, and you're done. While Windows requires a far more complex installation process because of that train wreck known as the registry, installing software onto a Mac OS X box is, quite correctly, a simple process. Not being able to do this via the command line has been amazingly frustrating, especially if you are doing installs on multiple machines. Now, you can just pop a terminal window, SSH to the remote machine, and install the software. Since this is just a command line process, you can use AppleScript with it as well, which allows you to more easily integrate remote installs with network management packages, like SNMPWatcher and netOctopus. So with the Xserve, you can do installs of software to multiple machines on your network with a single action or script. This is a serious force multiplier for the overworked IT manager. The command line hooks in the Xserve appear to be improved all over the system, (although printer creation won't make it into the mix until Jaguar Server is released), which will, for the savvy IT manager, make their life much easier, and also make the Xserve far more attractive to administrators of other Unix boxes.
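As a sketch of what that force multiplier looks like in practice, here's how scripted remote installs might be driven. The exact installer invocation is an assumption on my part, since Apple hasn't documented the command-line installer yet, and the host names and package path are made up:

```python
# Sketch: push one installer package to many servers over SSH.
# The "installer -pkg ... -target /" invocation is assumed, not
# confirmed; substitute whatever Apple's tool actually expects.
import subprocess

SERVERS = ["xserve1.example.com", "xserve2.example.com"]
PACKAGE = "/Volumes/Admin/Updates/SecurityFix.pkg"

def remote_install(host, pkg, dry_run=True):
    """Build (and optionally run) the SSH command for one server.
    With dry_run=True it just returns the command line for review."""
    cmd = ["ssh", host, "sudo", "installer", "-pkg", pkg, "-target", "/"]
    if dry_run:
        return " ".join(cmd)
    return subprocess.run(cmd, check=True)

for host in SERVERS:
    print(remote_install(host, PACKAGE))
```

The same loop scales from two servers to two hundred, which is the whole point of having installs available from the command line.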
Another item which gets scant mention in the Apple documentation, but is of great importance to the Xserve's acceptance by the IT community at large, is the improved support for SNMP. This is not to say that Apple doesn't realize its importance; the inclusion of the HP VP for OpenView (considered to be the leading network management application on the market) in the announcement shows that Apple realizes how important SNMP is. But it's not the kind of thing you can show off in a flashy way. The fact that Apple is going to be including the Net-SNMP package, which offers support for SNMP versions 1, 2, & 3, shows that Apple is finally taking SNMP more seriously than it did in the days of 8.5, when the bundled support, although useful and really appreciated, wasn't a full implementation of the SNMP standard.
With Net-SNMP, and SNMP in general, you not only get support for asking the end node for status, but trap support, which allows the end node to tell the management station that it's having a problem, or that some condition needs to be looked at. This means that a network administrator can configure the SNMP support the way he or she needs it to be set up, and the management application will then notify the administrator if something goes wrong, or is acting like it's about to go wrong. This allows administrators to be proactive in maintaining their network, and if you have a decent management application, you can be notified via email, pager, etc. (Okay, so giving individual nodes on your network the ability to page you may not be your personal version of a great idea, but users love it.) SNMP covers a lot of potential problem areas as well. If you have a device that is suddenly showing a lot of Ethernet interface errors, then you can start troubleshooting closer to the problem. In fact, if the management tool and the device support it, you can even tell if a printer has an open door, and which door is open. Or which input tray is out of paper. This doesn't sound like a big deal if all you have is an inkjet, but imagine dealing with a high-end, high-speed printer that is spitting out thousands of sheets a day. Knowing the status of critical components of that printer can be a real time and sanity saver. In addition to the Xserve, you can also download Net-SNMP from Dartware and install it on any of your current Mac OS X boxes, so they can all play in the magical world of managed networks.
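Scripting against Net-SNMP is straightforward too. The sample line below is the format Net-SNMP's `snmpget` prints for a counter; the parsing and the alert threshold are my own illustration, not part of any shipping tool:

```python
# Sketch: watch an interface error counter via snmpget-style output.
# Sample line matches Net-SNMP's output format; the threshold is
# purely illustrative.

def parse_counter(line: str) -> int:
    """Pull the integer out of a line like
    'IF-MIB::ifInErrors.1 = Counter32: 42'."""
    return int(line.rsplit(":", 1)[1])

def needs_attention(line: str, threshold: int = 100) -> bool:
    """True when the counter is high enough to warrant a look."""
    return parse_counter(line) > threshold

sample = "IF-MIB::ifInErrors.1 = Counter32: 42"
print(parse_counter(sample))
print(needs_attention(sample))
```

Wrap that in a cron job that mails you when `needs_attention` comes back true, and you have a poor man's management station for free.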
So Apple including SNMP with the Xserve is definitely "a good thing" from any point of view. But including SNMP capabilities with the Xserve is only half the story. What do you do with all this capability? How do you make use of this improved manageability? Well, you need an SNMP management application, and Apple includes one with the Xserve: InterMapper, from Dartware. InterMapper allows you to use the SNMP features in the Xserve to get live monitoring of system status, traffic, packet errors, etc. InterMapper has its own custom probes to allow you to monitor your machine via other methods as well. So if you have an email server, you can monitor that server by probing via POP, IMAP, or SMTP. If you have a print server, you can monitor any LPR queues. So not only can you monitor the SNMP functionality from the box, but you can also test the applications and services that box is being used for. You can also integrate InterMapper into higher-end SNMP frameworks, as it can send traps to other SNMP management applications. This is the other half of the equation, and Apple is giving it to you for free.
However, as a wise guy once said, "Wait, there's more..." Apple has also included an update to its server monitoring application suite that allows you to keep tabs on an Xserve remotely. Remember, the idea with the Xserve is that it sits in a rack in a server room, and you sit in your nice comfortable office or cube, and use it from there. That applies to monitoring tools as well.
The Xserve already includes the same monitoring tools that you would get with Mac OS X Server, so you can do configuration of users/groups, file & print services, internet services, etc. However, this application doesn't really do much in the way of organized monitoring of your server, and you can't check on multiple servers without multiple logins. Even if you log in, the only thing that the standard Mac OS X Server tools do is give you service status, not hardware status.
To augment this, Apple now has the Server Monitor application. This is a new remote administration application that allows you to constantly monitor multiple servers for critical hardware conditions, such as drive status (via the SMART interface), power status, network interface status, temperature and fan status, and whether the case has been opened. You can set Server Monitor to notify you via email, SMS messages, etc. You can set polling parameters, and generate an Apple System Profiler report for each server. It appears as though you can set different servers to alert you at different thresholds, which is important if you have different requirements. (i.e., a DNS server is far more concerned with interface errors than disk errors, and vice-versa for a file server.)
Server Monitor can be run remotely, but you can (obviously) only run it on Mac OS X. The transport used is an encrypted HTTPS (SSL) connection, and the data format is XML. This is a good choice, as it allows Apple, if it sees a need from its customers, to let third parties write other front ends for this data, so you could use other operating systems or applications to monitor the same details.
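Apple hasn't published the schema, but here's a sketch of how trivially a third-party front end could consume that kind of XML status data. Every element and attribute name below is invented for illustration:

```python
# The XML layout here is invented; Apple hasn't documented the real
# Server Monitor format. The point is how little code a front end needs.
import xml.etree.ElementTree as ET

REPORT = """
<server name="xserve1">
  <sensor kind="temperature" value="38.5" units="C"/>
  <sensor kind="fan" value="ok"/>
  <sensor kind="power" value="ok"/>
</server>
"""

def read_sensors(xml_text):
    """Return a {sensor kind: value} dict from a status report."""
    root = ET.fromstring(xml_text)
    return {s.get("kind"): s.get("value") for s in root.iter("sensor")}

sensors = read_sensors(REPORT)
print(sensors["temperature"])
print(sensors["fan"])
```

Any platform with an XML parser and an HTTPS client could build a monitor this way, which is exactly why XML over SSL is a smart transport choice.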
So, from the pre-release information that Apple is giving out, the management and monitoring side of the Xserve is being taken seriously, as is the integration of those features with the rest of the world. While it will never be completely dead, Apple's famed NIH attitude seems to have been shown the door in the case of the Xserve.
So the Xserve is doing well with hardware and included management software, but what about support from Apple? One of the things that has always been a killing strike against Apple servers is the lack of a 'real' maintenance and support organization: things like spare parts kits, 24x7 maintenance, even on weekends, etc.
Judging by the announcements, and if Apple can follow through on them, it looks as though the Xserve user will be able to get the level of support that this kind of box requires. The AppleCare Premium Service and Support plan is a three-year plan that covers the hardware and software you get when you buy an Xserve. The plan calls for 24x7 email and phone support with a 30-minute response time. Now, before we all jump up and down shouting "Huzzah", remember that depending on the contract, an autoresponse saying "We got your email" can be considered a response. (Yes, I am *extremely* cynical about things like that.) There is also onsite hardware coverage: 4-hour onsite response during business hours, and next-day response during non-business hours. Not as nice as IBM giving you your own engineer in an onsite office, but then Apple isn't selling you a $10,000,000 Xserve either.
So that's not a bad plan at all, but who wants to sit there with a dead box waiting for the Apple person to show up? Not many IT people. So that's where a very new part of the Apple support offerings for the Xserve comes in: the spare parts kit. In AppleSpeak, it's the AppleCare Service Parts Kit for Xserve. Customers can buy this from Apple, and it doesn't require certifications or special tools. It's a motherboard, a power supply, a fan, and a drive module with either a 60 or a 120GB drive. The only things it doesn't have are a CPU and a PCI card riser. So you can replace most of the parts that are most likely to fail without having to call Apple. There are no details as to what happens with the dead parts, i.e., do you send them back to Apple and get a new one? If so, is it a free swap, is there a fee, do you pay full price? Depending on the cost of the kit, I would expect that the swap should at most cost a nominal restocking fee plus shipping and handling. Doing a free swap is obviously better, but since we have no real details yet, that's the part of the deal that has to remain a question for now.
It looks as though Apple has reorganized the iServices people into Apple Professional Services, which is specifically designed to help you support your Xserve, and to provide integration services to help your Xserve slot into other networking structures. It *looks* as though outsourcing and other consulting services are being provided as well, but I haven't looked into the details on Apple Professional Services, so you may want to contact Apple on your own for more details.
Finally, you can still buy the various Professional SupportLine Tools and Professional Support Tools from Apple, so you can do most, if not all, of your support in-house if you so desire.
So it looks as though Apple is taking the support end of the Xserve seriously. This is good, although when it comes to support services, I'm going to withhold final judgment until I can actually see how good these services are in implementation as opposed to on paper.
To wrap up this series, I'll be taking a look at how the Xserve compares to offerings from other server vendors.
Xserve Part 1
created 24 May 2002
Xserve, the start of a new day at Apple?
Since its announcement on May 14th, the Xserve has been one of the most talked-about and analyzed Apple computers since the iMac. With good reason: while the Xserve is not Apple's first "real" server, it is the first one that will run an Apple-created operating system, namely Mac OS X Server.
The Xserve hardware, while not the ultimate small-server setup, is solid for what is really a first effort. The Xserve is a 1U rack-mount server, the first server Apple has produced that is designed to fit into a rack and not take up one iota of space it doesn't need to. Up until now, the only way to rack-mount an Apple box was to use a third-party kit from either Marathon Computing or GVStore. Now, Apple gives you one with the pretty Apple logo, and it turns out to be a very respectable box.
Now, while I'm going to try to take a decent look at this box, understand that this is all based on available information, which isn't much. So if Apple changes something prior to the actual shipping of the Xserve, or better documentation points out something I missed or got wrong, remember that I warned you.
A lot of people, including myself, are debating whether Apple is targeting the enterprise market with the Xserve. My answer to that question depends on how you are asking it. If you are asking whether this is an enterprise IT server, designed to directly compete with 1U offerings from IBM, Dell, HP, and others, I'd say no. As we will see, there are certain features missing from the Xserve that an enterprise server really needs. If, however, you are asking "Can the Xserve have a useful place in an enterprise IS organization?", the answer is an unqualified yes. My reasons for this opinion will become apparent as we look at the Xserve.
Apple is limiting the CPU in the Xserve to its 1GHz model. You really only get one decision on the CPU, namely how many, and you have to make it at the time of purchase. In other words, you have to make a rather permanent decision before you get a chance to benchmark and test the machine. This is a bit of a problem, as a lot of server administrator types, including yours truly, prefer to have some upgradability in this area. Often, we just don't need the second CPU, and use the savings for other things that we need. Being able to buy a second CPU later would make this decision a tad less stressful. However, looking at the pictures of the motherboard that Apple has provided, I can see why you aren't able to do this. Apple is using a single-card design that precludes upgrading the CPU, as the card has the CPU(s) permanently attached. Okay, so it really doesn't preclude it, but building an upgradable card, or using separate sockets for the CPUs, would drive up the cost of the system, and Apple is uniquely vulnerable to price comparisons.
By comparison, other vendors in the PC space allow you to purchase additional CPUs separately (in the case of Dell, for $800). This is not an earth-shattering problem by any means, but it is something Apple should think about for a future revision of the system.
Each CPU has a nice, fat L3 cache on a 64-bit bus. While I haven't found specifics indicating Xserve-specific changes, the quoted cache throughput for the Quicksilver GHz model is the same as for the Xserve, so it is reasonable to assume that the L3 cache architecture is identical. (Apple's desire to simplify inventory wherever possible makes this a high probability as well.) If this is the case, then the L3 cache is running at a 1:4 ratio relative to the CPU, a la the G4 tower.
A more interesting part of the Xserve hardware is the new system controller that allows for the use of DDR SDRAM, the first use of this type of RAM in an Apple system. (In a nutshell: whereas the standard SDRAM in a G4 tower allows one memory operation per clock cycle, DDR gives you two memory operations per clock, doubling your raw RAM speed without changing the motherboard speed. Memory performance goes up without a major redesign of the rest of the motherboard.) The new controller also allows for improved direct memory access (DMA) from PCI and other I/O components.
This is important for a number of reasons. First, one of the things that has really hampered G4 performance compared to the G3 has been memory. The G4 can handle memory I/O far faster than the G3, yet has been using the same memory running at the same speed as the G3 machines. So for memory-intensive operations, the difference between the two was minimal. With DDR RAM able to pump data to and from the CPUs twice as fast at the same motherboard speed, the G4s in the Xserve won't be waiting on RAM operations nearly as much as in the G4 towers. The improved DMA (which lets devices access main memory directly) means that PCI devices, hard drives, and the like can communicate with main memory faster than in earlier models.
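To put rough numbers on the double-pumping idea, here is a back-of-the-envelope sketch. The bus width and clock figures are generic illustrations (a 64-bit bus at 133MHz), not Apple's published Xserve specs:

```python
# Back-of-the-envelope peak memory bandwidth: width * clock * transfers/clock.
# The 64-bit / 133MHz figures are illustrative, not Apple's published specs.

def peak_bandwidth_mb_s(bus_bits, clock_mhz, transfers_per_clock):
    """Theoretical peak in MB/s: bytes per transfer * transfers per second."""
    bytes_per_transfer = bus_bits // 8
    return bytes_per_transfer * clock_mhz * transfers_per_clock

sdr = peak_bandwidth_mb_s(64, 133, 1)  # standard SDRAM: one transfer per clock
ddr = peak_bandwidth_mb_s(64, 133, 2)  # DDR: two transfers per clock

print(f"SDR: {sdr} MB/s, DDR: {ddr} MB/s")  # DDR is exactly double
```

The point of the arithmetic: the motherboard clock never changes, only the number of transfers squeezed into each tick.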
The big problem with the RAM in the Xserve is what it is not, namely error-correcting (ECC) RAM. ECC is available for DDR as well as standard PC133, so Apple could have done this. However, it's more expensive than non-ECC DDR, and if the implementation of the ECC capability sucks, it's less of a great thing than you think. For example, IBM's Light Path idea, which uses LEDs to light the way to the failed component, is a good implementation of the technology; of course, IBM also has clearly marked motherboards, easy access to parts, etc. If your motherboard design is horrid, like the Compaq 1600R I had the extreme displeasure to work on, then while ECC may expedite troubleshooting, you aren't going to be terribly happy with the process. Doing it right takes time and drives the cost of the system up (again, no matter how good the overall value, if Apple is more expensive by a dollar, they get hammered for it). But I would much rather wait for an IBM-class implementation of ECC than get a Compaq-class one now. Simply having a feature isn't enough; it has to be done correctly.
The PCI slots are all 66MHz, 64-bit slots, as opposed to the 33MHz, 64-bit slots in the G4 towers, which allows for improved data rates in the PCI subsystem for cards that can take advantage of it. (If the card is a 33MHz/32-bit card, the new bus won't magically make it faster, although the DMA improvements should still help.) The Xserve comes with two PCI slots and one shared PCI/AGP slot. The PCI/AGP slot is half-length; the PCI slots are full-length. Normally, the PCI/AGP slot is used for a second Gigabit interface, and the VGA card goes in one of the PCI slots, which still leaves one PCI slot for a SCSI or Fibre Channel card. One advantage to using a separate Gigabit card is redundancy: if you lose one interface, the other is still up and running. (One of the "advantages" often touted for the Dell rack mounts is that they have two interfaces on the same card. This is usually touted by the same people who go on and on about how redundant power supplies are critical because you can't have a single point of failure. So why would you settle for a single point of failure in your network interfaces?)
These I/O throughput improvements are critical for a server, where throughput, not CPU gigaflops, can be, and often is, the most important number. They should also help with one of the more consistent complaints about the G4 towers: their Gigabit Ethernet numbers. By improving on, or even replacing, the Key Largo and Uni-N controllers in the G4 tower, the Xserve becomes a system that can really take advantage of the speed of the G4 and the I/O capabilities of Gigabit Ethernet.
The Xserve's I/O improvements extend to the disk subsystems as well. Apple has made a somewhat controversial choice by going with ATA instead of SCSI as the primary disk bus, since SCSI is the traditional disk subsystem for 'real' servers and ATA supposedly 'cannot handle' the kinds of disk access that SCSI handles with ease. Let's kill a misunderstanding right now: this is a bus limitation, not a disk limitation. An ATA drive can handle life just as well as a SCSI drive. A drive is a drive; it does two things, writing data and reading data. It's the bus the drive connects to that determines usability in a given situation. ATA as a bus technology is simply not up to the kinds of simultaneous access that SCSI is designed to handle, but ATA drives are perfectly able to handle large-scale data needs as well as anything. The trick is to not put them on an ATA bus. Normally this is done by attaching ATA drives to either a SCSI or Fibre Channel bus, combining the price advantage of ATA drives with the server-class capabilities of those bus technologies. Apple is also finally making use of the SMART electronics in the drives, which let you monitor the drive's status, power inputs, etc. This ability to spot a drive on the way down, before it crashes, is critical to any server implementation, and one of many signs that Apple put thought into function, not just form.
For the implementation of the ATA buses in the Xserve, Apple has taken a different route, one that works around the limitations of ATA in an interesting way: they simply put each of the Xserve's main drives on its own bus. This way you get full ATA performance (which, in this kind of configuration, is pretty darn good) without the multiple-device bus problems that make normal ATA configurations suck so badly for servers. (As far as Apple's quoted speeds and their comparison to SCSI go, just remember that while numbers don't lie, they also don't tell you much without some form of analysis, and since humans do the analyzing, you are dealing with the spin of the human doing the analysis. Keep the salt handy.) What Apple has not done here is provide hardware RAID for the Xserve's ATA drives. They do provide software striping and RAID 1 (mirroring), so you can have the speed increase that striping provides, or the reliability increase that mirroring provides. (Yes, I know that striping is officially RAID 0. But striping has no redundancy, and in fact is less reliable than a single drive: if one drive in a stripe set dies, all your data just went away. So at best it's AID level 0, and is eventually going to kill your data.) Software RAID has a generally worse reputation than hardware RAID, and not without reason. My own horrid experience with Windows NT's software RAID ensured that I will never use any version of Windows' software RAID again. On the other hand, SoftRAID has never given me any problems, so I guess I'm even there.
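The "AID level 0" quip can be quantified. If each drive fails independently with some probability p over a given period (independence is an idealization; drives in one box share heat and power), a stripe set loses data if any drive dies, while a mirror loses data only if every copy dies:

```python
# Rough data-loss odds for striping (RAID 0) vs mirroring (RAID 1),
# assuming independent drive failures with probability p each. The
# 3% figure is a made-up illustration, not a real drive's spec.

def stripe_loss(p, n):
    """RAID 0: losing ANY of the n drives loses everything."""
    return 1 - (1 - p) ** n

def mirror_loss(p, n):
    """RAID 1: data survives unless ALL n copies fail."""
    return p ** n

p = 0.03  # hypothetical per-drive failure probability over some period
print(f"single drive:   {p:.4f}")
print(f"2-drive stripe: {stripe_loss(p, 2):.4f}")  # worse than one drive
print(f"2-drive mirror: {mirror_loss(p, 2):.4f}")  # far better than one drive
```

Striping roughly doubles your exposure with two drives; mirroring squares a small number, which is why it buys real reliability.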
In any event, hardware RAID does have some advantages over software RAID, primarily additional RAID levels and other features. But I've had hardware RAID go bad on me just as fast as software; a crappy implementation of either will cause you just as many problems. If you are really that concerned, use an external hardware RAID with a Fibre Channel card in the Xserve. That way you get speed, size, and greater flexibility. This is what I do, as I find that even with hot-swap drives, a dedicated external RAID beats an internal one, hardware or not. The point here is that the Xserve has a solid disk subsystem for what you tend to use 1U servers for. (Face it, you aren't using a single 1U server to run a terabyte Sybase database; you're using a couple of racks' worth of Xserves, and external RAID anyway, so most of these arguments about the Xserve are interesting only if you like debates about angels and pins.)
Since the drives are tied into the I/O improvements in the Xserve, you should get sweet performance out of them, regardless of bus technology. Again, while this is an innovative use of ATA, it's not a traditional enterprise implementation of disk storage in a server. However, I can easily see this system fitting into an enterprise. The drives are housed in hot-pluggable drive modules that let you remove them without shutting down the machine or removing it from the rack. This is an important feature for any server, and Apple seems to have done the enclosures right. The only problem here is that Apple has, at least for now, decided not to let you purchase the drive modules without a drive in them. This is a bit silly, and I hope Apple changes its mind: selling empty modules would give administrators greater flexibility in the initial configuration of the Xserve, which is an important consideration when you are deciding whose 1U to buy. Face it, for most server tasks a box is a box is a box, and little things like this make the difference in which vendor an administrator decides on.
Apple has made a number of very sensible decisions with the Xserve, some of which are long overdue. First up is the ability to run the Xserve headless, or sans monitor, without needing third-party VGA plugs that loop back the video signals. Servers don't generally need top-end display cards, and in fact many manufacturers allow you to forego a video display system altogether. Apple doesn't go that far, but it uses a fairly average ATI card that lets the machine run headless, yet can autodetect a monitor if one is attached for troubleshooting. Like everything else Apple does, this decision is being debated endlessly on web sites and mailing lists at all levels of the Mac community. My take is that Apple is not yet ready to be an enterprise vendor. They are, however, a multimedia computing vendor, and the Xserve is, among other markets, nicely targeted at someone who wants a rack-mounted system for Final Cut Pro or Maya. These applications, and their users, need solid video display systems, so Apple walks the middle road: you have to have a video card, but not a high-end Radeon or Nvidia card, and you don't need a monitor. This takes care of the biggest gripe in this area, so overall it's a plus.
Another long-awaited feature for Apple servers is a DB-9 serial port. Yes, we all know the serial port is dead. Well, kind of. There are two areas where it is most definitely not dead: UPS connectivity and troubleshooting consoles. While the major UPS vendors like APC are gradually getting around to using USB for console connections to their UPS products, serial still rules the roost here. Really, there's not much need for USB in this task: UPS-to-computer communications are very simple, and you don't tend to hot-swap UPSes a lot. (These things weigh a *ton*; you aren't trucking them hither and yon.) By including a DB-9 serial port, Apple lets the Xserve communicate easily with the vast majority of high-end and rack-mounted UPS systems, making it a more attractive system. The other reason for a serial line is troubleshooting. While you aren't going to cart a monitor around when a system drops off the network, most server administrators have something that can act as a serial console, and there are a few portable serial consoles designed for this very task. So by adding a DB-9 port, Apple makes it far easier to use the Xserve with common UPS units, and lets an administrator with a serial-equipped laptop or a console troubleshoot an Xserve without needing a monitor, even if the network connection(s) are down.
The third FireWire port on the front of the box is genius, at least in my eyes. While you can boot from a CD-ROM, being able to boot from a FireWire hard drive, or (if enabled) to boot the Xserve in Target Disk Mode and perform actions on it from another machine, is far preferable. This is also a nice way to get data to and from the Xserve fast, especially if you can't afford a Gigabit switch. While you can use the FireWire ports on the back of the Xserve for this purpose, the one on the front is just far more convenient. Outside of IT geek needs, if you are using the Xserve for video editing, the front port makes it a lot easier to dump video data right from the camera into the Xserve. Like the serial port, this is a feature with no downside.
The rest of the ports are USB, and as the KVM companies are running to USB as fast as they can, this is the only logical choice. I wouldn't have minded a USB port up front as well, but that's a minor issue.
The Xserve comes with only one power supply, which has been called out as a critical failure on Apple's part. But when I checked 1U servers from Sun, IBM, Dell, HP, and Compaq, I found that only Dell offers dual power supplies in a 1U form factor. I'm thinking that if redundant power supplies were critical in the 1U form factor, at least IBM would offer them as well. So if redundant power supplies are your critical requirement, then Dude, you're getting a Dell. Don't get me wrong, redundant power supplies are good to have. But one of the ideas behind a 1U server is that you buy multiples and use the redundancy of multiple computers to increase reliability. If you have two Xserves set up correctly, you have better redundancy than a single motherboard with two power supplies.
For temperature control, always critical in a tight form factor, the Xserve contains three fans: one in the power supply, one cooling the CPU card, and the last cooling the PCI slot area. The non-power-supply fans are replaceable without major component surgery. While I obviously cannot tell from pictures how hard it is to swap out internal components, judging by the layout, it looks as though you can do it without major effort or blood loss due to badly milled metal bits.
Externally, the Xserve is a reminder of why Apple is the innovator in industrial design. Not only does the case look good, it's quite functional. The front has clear indicators for power, case lock, network links, and CPU activity. The drive bays all have LEDs to show basic status. As I mentioned before, there is a front-mounted FireWire port, invaluable for maintenance, or for media transfer where a network connection is not the best option. The optical drive is a tray-loading CD-ROM. (If you are running headless, it's a tad difficult to tell whether a slot-loading drive has a disc in it.) Some folks have complained that the Xserve should have at least a DVD-ROM, but I have yet to see a pressing need for it. An option for a DVD-R drive would be a better idea, at least for the video creation crowd, but it's not a critical lack. The nice big thumbscrews that secure the Xserve to the rack are greatly appreciated, at least by this administrator.
The back side of the Xserve, while more utilitarian, still shows the attention to detail that Apple wins awards for. The ports appear to be easy to get to, and not so crowded that you can barely get your hands in to swap cables. I especially appreciate that the connectors that use screws to secure the cable, the DB-9 serial port and the video-out port, are isolated enough that you can actually use a screwdriver or thumbscrews without wishing you had an eight-year-old child handy, just for the smaller fingers. The hex-wrench lock is a good touch, although I'd like to see a mount on the box for the key; I think in the bowels of every network room is a small alcove with hundreds of those silly things. But being able to lock your servers so the drives can't just be popped out is a good thing, so I'm not going to complain too much about how Apple chose to implement it. All in all, the case design is solid and well thought out.
So, how does the Xserve stack up as far as enterprise hardware features go? Quite well actually. It is only lacking in a few areas, namely ECC RAM, on-board RAID/SCSI, and a redundant power supply. But remember, Apple is not an enterprise vendor, and this is their first shot at a rack mount server.
While Apple could have used ECC DDR, that would drive up the cost of both the motherboard and the RAM; I've seen anywhere from a $50 to $150 per-module premium for ECC DDR over standard DDR. As well, from what I could tell on various vendors' web sites, Apple is (at least the day I looked) the only vendor using DDR in a 1U unit. So the question is: is ECC a hard requirement for you? If so, you don't want an Xserve. If you need fast memory throughput in a 1U unit, the Xserve looks much more attractive.
The use of ATA for the Xserve drive subsystem is an obvious cost decision, and one shared by the Sun Fire V100. Using SCSI and a hardware RAID would simply drive the price up. The Xserve does win on drive capacity, as it is the only 1U unit to allow four internal hot-swappable drives. The lack of internal hardware RAID is somewhat disappointing, but if you want real redundancy in a RAID, you would use an external SCSI or Fibre Channel system anyway. As well, in my experience, software RAID mirroring in the Apple world has a far better record than in the Windows world. So while I would like to see hardware RAID in the Xserve, it's not a deal-breaker for me. If it is for you, look elsewhere.
The lack of a redundant power supply has been, in my opinion, overblown. While it's a nice feature to have, again, I was only able to find one vendor, namely Dell, with a 1U unit that has dual power supplies. If dual power supplies in a 1U unit were that critical, I would expect them to be standard across more companies than just Dell. Again, it's rare that you buy a 1U server as a single, standalone box. They tend to be bought in groups of two or more, and you get your redundancy that way.
The case design is minimalist without sacrificing functionality, and shows some nice touches that are often missing from PC vendors' designs.
So from a hardware point of view, the Xserve has room for improvement, but it's still a solid box for a first effort. As well, the thing hasn't been released yet, only announced, so there may be room for minor improvements before they actually ship. There is precedent for this, i.e. the original iMac's modem.
Next up, a look at the Xserve's server management and support options.
Snow Airport Base Station Review
created 29 April 2002
Time with the uber-pod,
living with Apple's latest Airport Base Station.
Well, I've been using one of the new white Base Stations for about three months now, and I have to give it the best compliment I have for a piece of network gear...I forget it's there. I haven't had to futz with it, restart it, change some setting, etc, since I set it up. It's been a beautiful thing.
Just as some background, in my house there are currently six computers: a G3 All-in-One, a 1999 PowerBook G3 (aka Lombard), a Blueberry iBook, a TiBook, an HP Pavilion, and a dual-800MHz G4 tower. The iBook is exclusively wireless; the TiBook uses both, as needed. The rest are all wired. Internet access is via cable modem (DHCP) to an Asante FR3004, which handles the wired connections. The Base Station's 10Mbps port is connected to the Asante via a crossover cable, and the 100Mbps port is used by my TiBook. (Hey, if I'm setting up the network, I get to pick what I use.) AirPort-to-Ethernet bridging is turned on; DHCP and NAT are turned off, as I use static IPs and the Asante takes care of NAT.
This is actually how I've set up a lot of base stations on corporate networks, so it's a familiar configuration for me. As usual, the Apple software in Mac OS X is top notch and easy to use. (Since I don't do Mac OS 9 anymore, you'll have to look elsewhere for information on how the Mac OS 9 software performs. And "easy to use" assumes you get networks; if you don't know anything about them, a lot of the terms are going to be confusing, especially NAT, port translation, and the like.) Changing the pertinent settings was about five minutes of work; restart the Base Station, and you're up and running. With the addition of the iBook, I wanted to see if there were any settings I needed to change. (It had been about two and a half months since the last time I had used the software, so I had forgotten a few things.) I connected to the base station, it told me I needed to update, I did, it restarted, I was done.
There has been some controversy over the claim that the base station has a 'built-in firewall'. Well, it depends on your definition of a firewall. At the most basic level, you can hide your network from the outside by using Network Address Translation, or NAT, creating your own private network. You can selectively block ports, or allow certain ports to be accessed only by specific machines. In that sense, it's a firewall. Now, if you compare it to something like BrickHouse, Firewalk, IBM's firewall, or dedicated hardware firewalls...well, no, it isn't. It's a good start, but if you want commercial firewall features, you have to go somewhere else. So Apple isn't being incorrect here; they're just using a very basic definition of what a firewall is.
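For readers fuzzy on what that port handling actually means, here is a toy sketch of the core idea: the router keeps a translation table mapping ports on its public address to private machines, and anything not in the table simply never reaches the LAN. All addresses and ports here are made-up illustrations:

```python
# Toy sketch of NAT port forwarding: a table mapping public-side ports
# to (private address, port) pairs. Addresses and ports are made up.

forwards = {
    80:  ("10.0.1.2", 80),    # web traffic goes to the web server
    548: ("10.0.1.3", 548),   # AFP traffic goes to the file server
}

def route_inbound(public_port):
    """Return the private destination, or None to drop the packet."""
    return forwards.get(public_port)

print(route_inbound(80))    # forwarded to the web server
print(route_inbound(5900))  # None: unmapped ports never reach the LAN
```

That drop-by-default behavior is the sense in which NAT acts like a firewall; what it lacks is everything else a commercial firewall does (stateful inspection, logging, rule sets, and so on).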
The new Base Stations also support 128-bit Wired Equivalent Privacy, or WEP. Now, if you've read the articles, cracking this takes about ten minutes, unless you really know what you are doing; then it's about five. WEP is a nice idea with a bad implementation. It's better than nothing at all, and it will keep the ignorant out, but simply hiding your network name so people can't casually browse for it helps there too.
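One concrete reason WEP falls over so fast: its initialization vector (IV) is only 24 bits, so IVs repeat quickly on a busy network, and a repeated IV under the same key is exactly what the published attacks exploit. A birthday-problem estimate, assuming IVs are chosen uniformly at random (many cards just count up from zero, which repeats even sooner after a reset):

```python
# How fast do WEP's 24-bit IVs collide? Birthday-problem estimate.
import math

IV_SPACE = 2 ** 24  # 16,777,216 possible IVs

def collision_probability(packets):
    """P(at least one repeated IV) after sending `packets` frames,
    using the standard exponential approximation to the birthday bound."""
    return 1 - math.exp(-packets * (packets - 1) / (2 * IV_SPACE))

print(f"{collision_probability(5000):.3f}")   # better than a coin flip already
print(f"{collision_probability(40000):.3f}")  # essentially certain
```

A busy access point can push 5,000 frames in well under a minute, which is why "ten minutes" is not an exaggeration.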
Apple did include support for RADIUS in this base station, for added security. Unfortunately, I can't tell you about the RADIUS aspects of the white Base Station, as I don't have a RADIUS server to test with, although if the RADIUS code works like the rest of it, I don't foresee any problems. There are some downsides, though nothing major. I still can't set SNMP settings, even though the base station supports SNMP. I realize that SNMP is not an issue for home use, but an advanced mode giving me access would be a big help for base stations on a managed network. (The Java configurator application, which worked quite well on the old model base stations, had problems with the password, so I was unable to use that utility for SNMP.) The problem with the lack of SNMP settings support is that SNMP is an outstanding way to spot a wonky base station, far better than waiting to hear from the users. Another thing I personally wish for, although it's not a big deal, is more Ethernet ports. Two are better than one, but four would be even better, although that would necessitate a form-factor change. In any case, it's not that big a deal, except that more ports would let you avoid setups like the one I use.
Quite frankly, the worst part of using the base station is going to be giving it back. Yes, I have an Asante wireless router, with more ports, and just as many features, but it's more of a problem child, although with every firmware upgrade, it gets better. (I'll talk more about my Asante in another column.)
In the end, based on my experience, it's a fantastic bit of hardware. It has some limitations that mean you can't use it as your only router, and setting SNMP variables is a pain (well, impossible at the moment), but with those caveats, you should be quite happy if you get one.
created 4 April 2002
The Emperor's not naked, but the bell bottoms have to go,
Six months with a TiBook
Well, in October of 2001, I finally upgraded from a Lombard PowerBook G3 to a TiBook 500. Less than a month later, the TiBook updates came out.
(Here's a tip: if I buy it, it's about to get revised or discontinued. I bought a IIci less than a month before it was discontinued. I bought a Performa 600 the same week it was discontinued. My 3400? Two weeks later, the first PowerBook G3 came out. My Lombard? Well, that was the longest; I had it for almost nine months before the Pismos came out. Then the aforementioned TiBook, and I bought an iPod three weeks before the 10GB models were released. Life in the computer lane...)
As far as specs go, I have a 500MHz TiBook, 640MB of RAM, a 30GB hard drive, an AirPort card, Mac OS X 10.1.3, and Mac OS 9.2.2. Ever since the native version of Adobe GoLive came out, though, I'm almost never in Classic, and even less often booting into 9.2.2. In fact, once the native version of DiskWarrior is released, the only thing I'll need 9 for at all is eFax. Thanks to EV Nova's release, I don't play Snood anymore.
When it comes to speed in everyday use, it's fantastic. It's more than fast enough for my needs (how fast does it have to be to run SNMP monitoring software, BBEdit, and Word?). It starts fast, it functions well. 99% of any speed issues are due more to threading limitations in Aqua than anything else. I regularly have seven to fifteen applications running, and it just cranks.
The only games I really play are EV Nova, a little Unreal Tournament (and quite badly, too), and Baldur's Gate II (with The Darkest Day mod). For these, the TiBook works really well: good sound, and I'm happy with the video speed. It could be faster, but it's fine for what I play.
Network performance is excellent as well, at least wired. As far as AirPort goes, well, let's just say that the antenna placement...well...sucks. In all seriousness, for maximum RF reception and transmission, you want the antenna up high and as unobstructed as possible. The TiBook antennas are down low, very close to the user, and have a very narrow reception angle.
This is the first area where Apple really backed itself into a corner on this design. Ideally, you want the antenna in the screen area, for maximum exposure, but then you have to have some RF-transparent material in the lid. If you want that, plus some measure of toughness, you're stuck with polycarbonate or graphite composites, and you lose the whole TiBook 'look'. Now (and I am not telling you how to do this, nor recommending it, so if you try it on your own and break your TiBook while voiding your warranty, don't get angry at me; I know how to do this, and it's not worth the trouble), if you were to run an antenna lead through one of the hinges, along the back of the screen, and create a loop of the proper size in the Apple logo on the lid, that could work. But you'd have to be careful not to short the lead to any of the screen components, and this is quite thin wire I'm talking about, so messing it up would be easy.
The other option would be for Apple to use the entire case as an antenna, and electrically tune the case as needed. Unfortunately, this involves running current through the case, and is a really bad idea for a laptop. Works great on airplanes though. In the end, short of a major screen area redesign, the TiBook is always going to have really bad RF reception.
This brings me to the worst part of the TiBook: the case. It's sleek, pretty, lightweight, and incredibly annoying to use. Starting at the front, the latch is just amazingly wonky. It's a nice idea, but if you tend to close your case slowly, as I do, the little drop-down latch comes out a tad too early, and you end up jacking the lid up and down to get it closed properly. Considering the speed at which Mac OS X sleeps and wakes, this creates some interesting effects.
The entire case is just a tad too flexible for its own good. Even with the battery fixes in place, if I have a finger in the wrong place, boom boom, out go the lights. Again, this is not something that a plastic shim is going to fix. The case needs stiffening, and that's a major redesign.
The reason I have problems with the latch is that I am babying the hinges. Again, they're sleek; they have a gorgeous form. But as we have seen since the release of the TiBook, they are a major weak point of this machine. So I'm rather hesitant to close the lid with the same speed or enthusiasm I used with my Lombard or the 3400. While I understand that hinges aren't supposed to be used in an overly aggressive fashion, they are a critical feature, and one of the few mechanical parts of the TiBook, so they should be a tad overbuilt, not underbuilt. The hinge design is also responsible for the worst part of using a TiBook: accessing the ports in the back.
The port area is so thin, and the hinge overhangs just far enough, that using the ports regularly is almost painful, and if you have to unplug modem cables a lot, it's literally painful, unless you have a four-year-old handy whose fingers are thin enough to fit in there properly. If you place the TiBook on a desk with the ports facing away from you, and then manage to open the port cover, which is not as easy as it should be, you can't see the port ID symbols. So you either have to lean waaaay over to see them, or do it by feel. I spent many years connecting cables on aircraft, so the latter method isn't too hard for me, but it's annoying, it's stupid, and it screams WinTel. You almost can't use an Ethernet cable with a boot over the latch on the connector, again because of the stupid overhang. Quite honestly, the only reason I went for a TiBook instead of an iBook is that I need dual monitors. The extra speed is not worth this thrice-daily cable struggle. I use my laptop as...well...oddly enough, a *mobile* computer. Unfortunately, the TiBook is just not well designed as a mobile computer, which is silly, considering who makes it. If I could get a dual-monitor iBook, I'd trade this thing in a heartbeat. The new iBooks are a perfect example of how a laptop case should be designed.
So, in the end, the PowerBook G4 is a great engine in a really mediocre body. Quite frankly, unless you need a G4, PC Card support, or dual-monitor support, get an iBook. They're cheaper, better designed, and far less annoying to deal with.
created 26 Feb. 2002
So a while back, I wrote a reasonably well-received rant about installers and their bad habits. Having done that, I thought I should point out some people who are doing things right, and some who are not.
One of the companies doing things the right way, and this is no surprise, is the Omni Group. Drag and drop is the method here, and they are one of the companies that really takes advantage of Mac OS X's bundles for application distribution. For example, in OmniWeb, all application-specific files are in the application bundle, including their crash reporting utility. This makes moving OmniWeb to a different drive, location, etc., quite simple. OmniGraffle is the same way. On my TiBook, the only external files are my custom palettes.
Microsoft Office v. X is another example of a good installation. The base install is, again, drag and drop, with the Value Pack using a more traditional installer. All the files related to my use of Office are kept in my home directory or in the application's folder. Nothing lands in my /System or /Library directories.
That's not to say there are never circumstances that warrant installing files to /Library. For example, Adobe installs quite a few items in /Library/Application Support that are needed by almost any Adobe application. This is a good practice, as common files are made available to all the applications that need them. In my home Library/Application Support, I see that Illustrator sets up a plug-in directory that lets me have user-specific plug-ins. Again, this is a good practice, especially when you are using Mac OS X in a design house, where you may have people who need custom plug-in configurations. Just put the user-specific ones in home directories, and that way everyone has a common base configuration, along with their own custom additions.
That's not to say Adobe is winning any prizes here. The initial installer for GoLive 6 fell back into the "must quit all applications" nonsense from Mac OS 9. Even worse, the silly thing didn't look at the processes it was killing; it just killed anything owned by the currently logged-in user that qualified as an application. Including background applications. Like loginwindow. Which is the root process for local logins on Mac OS X. And if you kill loginwindow, you are logged out of Mac OS X. For this, Adobe gets a boot to the head.
Microsoft gets a boot to the head here as well, for the Office security update. If you need all Microsoft applications to quit, then have the updater do so, after warning the user that you are about to do this. There is no reason to require this, not tell people, and then have the installer just sit in Neverland until the user figures it out. Another boot for allowing non-Microsoft applications to traumatize the installer.
However, when it comes to just plain annoying and sometimes destructive installers, there is one target, one virtual black hole for boots to the head...Apple.
First, you have the iTunes 2.0 installer. How hard is it to not assume what people are going to name hard drives and partitions? This is, after all, the company that made renaming not only easy, but almost de rigueur, by making the default name "Macintosh HD", thereby begging people to change it to almost anything else. The same goes for Mac OS X; calling the drive "Mac OS X" is a huge signpost that says, "Hey, rename me!". How anyone could write such a silly installer is a question that I cannot answer, but for causing mass trauma to data, iTunes 2.0 gets Apple a boot to the head...a sweaty one...with chains...and spiked heels.
But that's not all we are going to upbraid Apple installers for. It is true that iTunes 2.0 was the most destructive one, but for sheer consistent, nigh-constant annoyance, the winner is...any OS update installer.
I have, like many Mac users, my own...unique file system. I have folders inside of /Applications with names like "Communications" and "Multimedia". So things like Mail and Image Capture, not surprisingly, get put in those respective locations. Unfortunately, either Apple's installer is just completely unable to look for a file, or the people writing them are kept in a dank hole, far from any actual Mac users. Either is equally possible. There are at least three programmatic ways to find a file in Mac OS X: you can use the appropriate APIs, you can use the Unix 'find' command, you can even use AppleScript. So why can't anything get updated if it is not precisely in the same location that the initial installation of the OS placed it? It's not like this is impossible. When I updated my copy of ConceptDraw Pro, it found the things it needed, even though they were not directly in /Applications. LimeWire uses InstallAnywhere, which has the sense to ask me where I want things to go. Toast Titanium, from Roxio, was able to handle my odd locations for Toast. Installer VISE from MindVision, InstallerMaker from Aladdin, InstallAnywhere (which is a Java-based installer, so it's not even a Mac-specific product!), they are all able to handle this correctly.
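Since the Unix 'find' command is one of those three ways, here is a minimal sketch of what an updater could do instead of hard-coding a path. The folder and bundle names are stand-ins, and a scratch directory substitutes for a real /Applications:

```shell
# Scratch directory standing in for /Applications
APPS=$(mktemp -d)
mkdir -p "$APPS/Communications/Mail.app/Contents"

# Locate the bundle wherever the user filed it, instead of assuming
# it still sits at the top level of the Applications folder:
found=$(find "$APPS" -type d -name "Mail.app" | head -n 1)
echo "Updating bundle at: $found"
```

One line of shell, and the installer no longer cares whether Mail lives at the top level or three folders deep.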
Why can't Apple? It's not like they don't know Mac OS X well enough. Come on Apple, Adobe, Microsoft, and all the other companies that are writing buggy/bad installers. This is not hard. It's just tedious. But as I said before, one of the very first experiences anyone has with your product is the installer. If you cannot take the time and effort to create a good installer, what is that telling people about the care and time you put into the product they are installing? Is that really the message you want to send?
Almost forgot...for annoying OS update installers, Apple gets a closet's worth of boots to the head.
The Xmas Geek
created 21 Dec. 2001
So with Christmas upon us (regardless of religious beliefs or lack thereof, the season and the day have taken on a life of their own, beyond any one belief's traditions, and it is in that sense that I refer to it), it's time for some thoughts on being a Mac user.
Now, a lot of folks talk about the Mac community like we are one homogeneous mass of fanatics wearing Steve Jobs underoos. I agree that to many folks, the logic behind camping out in a mall to wait for the opening of a computer store is...missing. But we aren't one mass. We are, in fact, a fractious, squabbling lot. We argue with ourselves as much as we argue with outsiders. There are the hardcore fanatics, and the folks who like their Macs, but put them down occasionally. There are the people who never customize their machines, and the ones who disassemble the case so as to give it a really cool paint job.
In other words, we're like any other group. It's only when you compare us to Windows users that we stand out. There is something different about Mac people, and it has nothing to do with the large artist contingent. It has to do with a machine that smiles at you when you start it, and says "Welcome to Mac OS" instead of "Starting Windows". It's about people not only caring about the interface, but making the interface the identity of the computer, because they realize that it will become that anyway.
It's about the fact that in the middle of a highly technical developer conference, you have a "Stump the Experts" panel, which gives away prizes from LCD flat panel displays to Apple ///s, and the moderator is wearing a "Cat in the Hat" hat. And this contest has been happening for ten years. Or one of the OS developer managers happening to wear leather pants one year, and having that become a tradition. (Only in the Mac community. Can you really see Steve Ballmer in leather pants? Would you even want to?)
How many other computers have users like Sinbad, who, in addition to being just an amazingly funny comic, is a multimedia geek? Or Andy Ihnatko, who shows the power of AppleScript, not by creating some banking system, or a network monitoring application, but by cobbling together some motion sensors, a "Darth Vader" bank, and a digital camera, all to keep the cat off his laser printer? Any computer company can get some famous twit to stand up and say "I use (computer name here)". But how many of them get Tom Clancy to not only love them, but to come up with one of the best quotes about the Mac that has ever been said:
"Never ask a man what computer he drives. If it's a Mac, he'll tell you. If it isn't, don't embarrass him."
That strikes to the heart of the matter. Face it, PC users rarely care about who makes the system. Compaq, Dell, Gateway, IBM, who really cares? Oh, there are some functional issues, or price issues, but in the end, it's a PC, it has an x86 chip...(yawn). Mac users care about their computers. They paint them funky colors, they create works of art out of them, or even fish tanks. They keep ancient Macs because it was the first Mac they had, and "you can't just throw it away." I know people with complete sets of System 6 floppies, because that was what they learned the Mac on, and they just can't throw them away. We have companies with mottos like "It doesn't suck".
And it's true, we love our machines. But there's the other half of the equation: the people. The Mac community is a really unique community. The first MacWorld I ever had a function at, I stayed at someone's house in San Francisco, along with about three other people. Ron just opened his house to essentially three strangers. Why? Because they needed a place to crash, and, my guess is, they seemed okay. Ron's not unusual. There are examples of that happening all the time. You know someone through a mailing list, or some other mechanism, and they need a place to crash, so, "Hey, I've got a spare bed in my hotel room, crash with me". We figure out arcane schemes to get people to their first expo. People helping each other out, and making a friend or two along the way.
Even the writers are weird compared to PC writers. Adam Engst, of TidBITS fame, told a story at a MacWorld about how he had won a prize at MacHack, a big piece of wood. He left it in his hotel room, hidden away, and the next year, got the same hotel room, and behold, there was Adam's stick. Now, most computer writers would have laughed a bit, and that's the end. Adam turns it into a paper on big pieces of wood and hotel rooms as a memory subsystem. Great capacity, horrible latency. Because the Mac inspires whimsy, and humor, we get cool things like that.
Another example of the Mac community was Don Crabb. Don was a great writer, and more importantly a great guy. A good human being. He died last year, and more has been said about him than I have room for, and better than I ever could. But one of the coolest, and most low-key tributes came from Apple. There was a Tech Info Library (TIL) article devoted to him, number 75050. It simply said, "Thanks Don". Again, we don't just "Think Different", we act different. Sometimes, that's a pretty good thing.
But one of the coolest, and oddest things for the rest of the computing world has to be MacWorld Expo. MacHack, the WWDC, these things have analogues in the PC world. But nothing makes the WinTel crowd move hurriedly to the other side of the street like MacWorld Expo. It's more than a trade show, it's also the biggest social event in the computer industry. There are people that I consider good friends, but I see them about twice a year: once in New York, once in San Francisco. (Heck, even the nostalgia about MacWorld Boston is unique. Would anyone really care if Comdex changed cities? But boy, there are still people mad that MacWorld isn't in Boston anymore.)
The best accounting of MacWorld Expo came from a PC writer, in a PC publication. (I've forgotten the name of both, and if anyone can connect me to them, I'll be eternally grateful.) She talked about having to cover a MacWorld, and to her, this was like another trip into the lower circles of Hell. Because, for the most part, PC conferences are not fun. In fact, Comdex is not a reward, but a punishment in most cases. PC conferences tend to be annoying, garish places, where you are assaulted by booth babes and marketing wonks, showing you the next greatest thing, which you'll never actually be able to buy, but isn't it cool, and they're taking advance orders. They're tiring, and you just don't want to go. So off she goes to MacWorld. And a curious thing happens. She notices that, for one, you can actually buy stuff.
And when she's talking to someone about a product, they turn out to be one of the developers. Even the people who look like marketing people, heck, even the ones who are marketing people know the product. And can answer questions. Intelligently.
But then she sees the people. And notices the true feature of MacWorld. She described what looked like a family reunion. Squealing, hugs, "Ohmigods", the whole bit. Except, these were just people who only see each other at the expo, and normally only communicate by email. And she realized something. MacWorld Expo is more of a community gathering with a bit of computer thrown in, than a computer trade show. Here's all these people in the middle of what should be an agonizingly bad time, and yet, they're all having fun, laughing, planning their party schedule, etc. It was just a bunch of people having a week-long party. She finally got why Mac people are the way they are. Not the box, but the other users. It made her think about the computer in a whole new light, and a far better one at that.
I'll close with my own little Mac story. I had gotten a piece of email from a woman in England, who was buying a Mac, and was just having a terrible time trying to find her "Virtual Machine". Everyone told her she needed one, and she was just about beside herself trying to find one that ran on the Mac. She had read one of my columns in MacWeek, and was hoping I could help her out, if I wasn't too busy.
My reaction was...????...um...sure!
But as we emailed back and forth, I realized that she had a bunch of people spouting Java-isms at her, and some other technical things. So over about two or three weeks, and a few screenshots, I managed to explain to her that she didn't have to worry, everything was fine, and here's what they were really talking about.
When it was all done, she sent me the nicest email I have ever received, thanking me for taking all that time to help her out. She was just so happy to find out that Mac writers are so nice and kind and helpful that she was going to insist that her family, and all her friends, buy Macs too, because I took what added up to about half an hour to help her. Now, I'm not that much of a boy scout, but I just felt bad for this person, so I gave her a hand. And I realized that this large network of people helping each other out, via mailing lists, forums, MUGs, and spare beds in hotel rooms, is the value of the Mac. Not AltiVec, megahertz myths, or titanium. But a bunch of people being nice to each other more often than not.
And after all, isn't that what Christmas is all about?
Bad Installer, No Doughnut
created 18 Dec. 2001
What is going on with installers and applications these days?
Or more properly, what is going on with the people writing installers and applications these days?
My guess would be that they don't want to spend the time to turn installation from a chore into something easy, as it should be. Now, when I say this, I am not speaking of drag and drop installs of single products. Those are simple, flexible, and easy, as they should be. But there are some folks out there who need to be flogged until they 'get it', at least about installers.
First of all, kernel extensions are not to be used simply because they can be. In fact, and Apple has been very consistent about this, they shouldn't be used unless they have to be. But I am starting to see them used like Mac OS 9 extensions, and this is a very bad thing. Especially after some of the folks on the Omni Group Mac OS X Admin mailing list managed to beat the proper use of kernel extensions into my head (not an easy task!), I now realize how bad. And how unnecessary. For example, I installed, for about two minutes, the Norton AntiVirus version 8 for Mac OS X beta. Then I looked at the list of installed files, fired up Terminal, and commenced deleting three, count 'em, THREE kernel extensions.
Can someone please tell me why an Anti-Virus utility needs any kernel extensions? Especially when I'm using one already that doesn't need one at all, namely Virex 7? Let's analyze what an antivirus application has to do:
- It has to, when ordered, analyze files for certain viral structures. Okay, so that requires read and write access, so that if a virus is found, it can be fixed, or the infected file deleted. No need for a kernel extension here, root access perhaps, but not a kernel extension.
- It should monitor applications and documents that are opened, analyze them discreetly for virii, and handle them appropriately if one is found. Again, this can be achieved with a daemon, or a background server that monitors this activity. No need for a kernel extension (kext) here.
- It should be able to, periodically, check for newer updates, download, and install them. Not even in the same dimension as a need for a kernel extension, just use cron.
- It should be able to monitor system activity for virus-like behavior, and act accordingly. Okay, but again, this can be handled three miles over the kernel.
- It should monitor media insertion, such as Zip, CD-R, etc., and automatically scan them, if desired, for virii, and handle this appropriately. This isn't a kernel extension function either.
- It should have a multi-layered notification system that informs you of any virus activity. Hello, email doesn't need a kernel extension.
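To make the cron point concrete, here is a minimal sketch of what the periodic-update item looks like as a crontab entry. The updater path and flag are hypothetical, not any real product's:

```shell
# Hypothetical updater, run every night at 4:30 AM -- the kind of job
# that needs cron, not a kernel extension.
ENTRY='30 4 * * * /usr/local/bin/avupdate --quiet'

# Write the entry to a file; on a real system you would then load it
# with: crontab "$CRONFILE"
CRONFILE=$(mktemp)
echo "$ENTRY" > "$CRONFILE"
cat "$CRONFILE"
```

Five fields for the schedule, one command to run. That's the whole mechanism, and it lives entirely in user space.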
In short, there is no logical or functional reason for Norton Antivirus to need three kexts, except that I imagine it made life easier on the programmers. But tough noogies, that's part of being a programmer. You do the work, so that the user, you know, the people who trade you money for code, gets a safe, functional application. I have no pity on a company that destabilizes my system to save a buck.
First on the witless parade is the "Quit all running applications to install this" dialog. Excuse me? I don't even have to do this to install updates to the OS, so you want to give me a reason why I have to do it to install an application? Other than "But that's the way we always did it on Macs", which is not a reason at all, you don't have one. That's not even trying hard. I mean, come on, look at the screen. New OS, works differently; you don't have to do this, and it's pretty rude to do it. Next you're going to tell me to disable extensions and rebuild my desktop? Give me a break.
Next is the "You must authenticate to install anything" trend. Knock it off. I should be able to install anything I want in my home directory without permission. (That's a benefit of Unix, by the way.) Unless the application needs access to a directory outside of my home directory, or I don't have the appropriate permissions for the directory I am installing to, the authentication dialog should never come up. Don't even try to tell me that every single application that I've installed in the last few months that wanted a password couldn't have installed in my home directory just as easily. That's just being lazy, and if you do this, then you deserve to have your product returned, and nastygrams sent your way. Spend three minutes when you build the installer, for crying out loud; it's not rocket science. Heck, I've done it, and all I can program is AppleScript!
Right after them on my list is "You cannot choose where your application shall be installed; only I, the installer maker, am wise enough to do this." Could we be any lazier? I don't see how. It's a Mac, people; maybe I want my software installed three hundred layers deep in my home directory. Maybe I want to install it on a different partition. The point is, it's my decision. If there are components that need to go in a specific place, because there is no other way for your application to function, then put those there, and let me put my stuff where I want it. Virex is both a nice and a poor example here. The GUI tools go in one directory, the command line tools go in another. Since the CLI tools have to go in a specific place to work well, that part needs the authentication. But NAI folks? Let me tell you, I have moved that GUI all over the place; there is no reason why I can't pick a location of my own choosing for where it installs. Again, laziness. No one wants to waste time and money on installer writing.
Which is about the stupidest thing I've ever heard of. The installer is the first impression a user has of your product. If they are angry before they even get to use your application, do you really think they are going to be willing to work with any other kind of annoyance? Hardly. The installer should get as much care as the main loop of the application, because in a sense, it's just as important. If there is a reason for something to go somewhere specific, then put that in the installer notes that pop up during the installation, or make that part a component. Don't make me authenticate every single blankety-blank solitaire game I try.
As the saying goes, you never have a second chance to make a first impression.
Virtual PC 5, 1.0 all over again
created 5 Dec. 2001
Well, having worked with the first Mac OS X-native version of Virtual PC, I am struck by a certain amount of nostalgia, namely memories of Virtual PC 1.0 on a PowerBook 3400/240. Now, this is not to say that VPC 5 is a bad product, just that the first major OS X release is a tad on the slow side.
Now, just to establish background, I'm running VPC 5 with Win 98 SE on a PowerBook G4/500, 640MB of RAM, a 30GB hard drive with 15GB free space, my video card is the Rage 128, and I'm using two monitors. OS X version for testing is 10.1.1, and in line with the way I work, I didn't test VPC running in 9.2.1, as I don't boot into Mac OS 9 unless I am running DiskWarrior.
VPC installs quite well, albeit slowly. This has nothing to do with quality, or lack thereof, but rather with the amount of data to be dumped onto your hard drive (the Windows 98 SE drive image sits at about 722MB in its current form). You will need an admin password during the install/setup, but that's the only real inconvenience. When running VPC 5 under OS X, the drive images are installed by default in the Documents directory in your home directory, or ~/Documents. The setup itself is fairly simple, mostly setting how much RAM you want your Virtual Machine (VM) to use, network settings, etc. I set my VM RAM to 384MB, and used the "Virtual Switch" network setting to give my VM its own static IP address, so it could run well behind my firewall. The rest of the preferences are fairly standard for Virtual PC, nothing radically new here. Since I was only working with Windows 98 SE, I didn't test the multiple OS functionality; that will have to wait until I can get a copy of OS/2 Warp Server 5. (A virtualized journaled file system...that should be fun to work with.)
The functionality is what you would expect from VPC: you get a PC OS running in a window, or in full screen mode. You can do almost anything with VPC that you can with a native Intel-based operating system. (Note here: I didn't try games, and I think that people who use VPC for games are trying too hard for what they get. If you want to play PC games, go buy a cheap PC, drop in a nice Nvidia card and a chunk of RAM, and play away. You'll be much happier.) While not the way to run games or 3-D software, if you need to use things like Project, Access, or other Windows/Linux/etc.-only apps, it will do the job. It's also handy if you are doing cross-platform web work, or using other cross-platform tools, like FileMaker Pro, RealBasic, etc.
The only real problem here is the speed. While not as glacial as GNOME on my system, menus take a couple of seconds to pop up, a reboot of the VM OS takes around five minutes, and if I open a window to "My Computer", I can watch the various window elements fill in. Now, VPC was never a speed demon, but this reminds me of masochistic experiments running OS/2 Warp 3 on a 386 in 4MB of RAM. If you set Win98 for opaque window dragging, you can get the cursor so far ahead of the window that you get redraw fragments on screen. I haven't had a chance to test this with Office 2000 or XP yet, but I'm not really looking forward to it.
Now, let's understand here, this speed issue is fixable. It's an optimization issue, and as anyone who's done development knows, optimization has to happen last; otherwise you are optimizing a fluid code base. As well, you are better off waiting for real-world complaints, and then deciding what needs to be optimized. So in general, think of VPC 5 (for OS X, anyway) as a 1.0 product. It's nicely done, and works well, but is just slow. "It's just learning to walk" would be another way to put it. If you need it OS X native, then by all means upgrade. If you don't, then you might want to wait for an update or two before laying out the cash, unless you simply have to have a new sparkly.
Office is finally here, woo-hoo!
created 14 Nov. 2001
Well, native on Mac OS X, at least. Which, of course, is what Office users have wanted to see. The wait is over as of Nov. 19th, and it's been worth it.
(For Excel fans...I have used Excel about five times since the program's creation. Not through any dislike of the product; I just don't use spreadsheets. So, rather than regurgitate marketing info, I'll give you my one impression, and that is, the 'pop-up' effect in Excel X is a nice idea. Good user feedback is always welcome.)
One thing to keep in mind here is that if you are looking for a plethora of new features, you aren't going to find them. Microsoft chose (wisely, I think) to make the first version of Office on OS X a truly outstanding Carbon application. Considering the size and complexity of Office, they pulled it off. The interface, the way Office works, the integration with the OS, everything. Office v. X is, for me at least, the premiere Carbon application, and shows just what can be done with Carbon. About the only thing Office missed the boat on, and this has more to do with timing than anything else, is Services, newly available to Carbon applications in Mac OS X. To my knowledge, the first Carbon application to supply these to the user is BBEdit 6.5, from Bare Bones.
That's not to say there are no new features, just not a lot of them, and most aren't screamingly obvious. Again, this version is meant to be a great Carbon application. New features will come, and considering that any new thing is a potential new set of bugs, I can wait. Besides, with this version, there are enough improvements overall that my wish list is quite small.
One of the nice things about Office being native on Mac OS X is the PDF support. Not just for printing, but for importing documents. I love PDF. It does everything I need it to do, as long as I am careful about fonts, and it's supported everywhere. Unfortunately, until Office v. X, what it couldn't be is a graphic format in Word, PowerPoint, etc. Now it can be. In truth, this has a lot to do with QuickTime in Mac OS X, but still, being able to use one format almost everywhere is handy. Another nice feature of Office overall is transparencies. You can use them in Word, PowerPoint, and Excel. (No, you don't get them in Entourage, and I can't figure a reason for an email app to have them.) Transparencies are one of those things you don't get until you use them the first time, and then you wonder how you survived without them. Kind of like multiple monitors. They just give your images an added dimension, and allow you to enhance things in ways you couldn't otherwise.
The reminders mechanism received a rework as well. Instead of an extension/control strip module, this runs from a background application called the Microsoft Database Daemon. This handles Entourage X database access from the various Office applications, and also serves as the new event reminder server. In addition to the traditional Office events, meetings, tasks, etc., Office v. X now supports .NET notifications. So, if you are using an application that integrates with .NET, like MSN Messenger, the notification system can integrate with that as well. This marks the initial integration of MBU products with .NET, and shows that Microsoft's commitment to the Mac OS is not a minor one.
.NET integration brings up a rumor about Office v. X that bears quashing. The registration process in Office v. X is not the idiotic node-lock scheme that Office/Windows XP uses. The only change is that when you start up an Office v. X application, it looks for any other instances of an Office v. X application with the same serial number. If it finds one, and the serial number isn't for multiple copies, you can't use that application until the other one quits, or you get a different serial number. Microsoft will be making multi-copy numbers available, so you can still use one number for multiple copies, you just have to pay for it. Before anyone screams in protest, Microsoft is a latecomer to this party. FileMaker, Adobe, and I believe Quark all check for duplicate serial numbers on a network before starting up, so Microsoft is hardly being draconian here.
A somewhat surprising new feature of Office v. X is the inclusion of a demo version of RealBasic in the package. RealBasic now has some excellent Office automation features that give you capabilities comparable to VBA, but since these automation applications are separate applications, not embedded in a Word or Excel file, they don't carry the virus danger that VBA is saddled with.
Microsoft has done an excellent job of integrating Office with the new printing architecture in Mac OS X, via print dialog extensions for Word, PowerPoint, Entourage X, Excel, etc. Not only that, but they are also leading the way in not scattering cruft everywhere. With the exception of the preferences and the Documents folder, everything Office needs stays in the Office folder. If any background process, like the Database Daemon, needs to run, the entries show up in the Login pane of the System Preferences application. That's on a per-user basis, not a machine basis.
It's important that a major developer like Microsoft is seen to be doing its best to follow the rules for such things in Mac OS X. It gives the other developers less ability to point a finger and say, "Well they're doing it, so I am too!" Now, this unfortunately means that you lose the Microsoft Office Manager menu, but so be it. Although with the new ability to place things in the menu bar, that's not to say they couldn't bring it back in a later release.
Another missing feature, and one that Microsoft has caught a lot of flak for, unfairly I think, is Palm integration. This release of Entourage X doesn't have it. And no, it's not because Microsoft hates Palm, or wants us all to buy PocketPC devices. It's because Palm hasn't gotten the conduit development kit for Mac OS X out yet. That's right folks, it's Palm's fault, not Microsoft's. So if you feel the desperate need to flame anyone over this, take a cold shower. If that doesn't work, go yell at Palm; they're the problem here, not Microsoft.
Finally, we have to touch on the integration. Face it, that's one of the big selling points of Office, or suites in general. They work well together. And there's a payoff. I like having a single dictionary. I like being able to have applications I use a lot work together in a coherent manner. And I like it when they look good doing it.
Okay, I have to touch on this. Office looks good. I don't mean "It complies well with the rules of Aqua." I mean SuperFly, P-Funk smoooooooth good. Hear Barry White saying, "It looks gooood," and you have what I mean. This is the best interface MS has ever come up with, and the first one I didn't feel the need to immediately hack apart. The toolbars are less obtrusive than they were in Office 2001, they get out of my way, and it takes less time than ever to get the dancing baloney, AKA "Max", sent back to his little oubliette.
Now, the most important part of Office, for me, is Entourage. I love this application, and in truth, outside of Baldur's Gate II, it's my favorite Mac application. I use this to run my life in something vaguely resembling an organized fashion, and have since its inception.
Email is the centerpoint of Entourage X, and there have been a lot of improvements here, although most of them are behind the scenes. I found IMAP to be much faster than in Entourage 2001. Downloading messages happens much faster, which for me is important, since I am a huge fan of IMAP, and use it as much as possible. The release notes talk about improvements for various servers that gave Entourage 2001 issues, but since I haven't used those servers, I can't verify them.
Entourage X's rules are still top notch, but they bring me to my first quibble with the product. With IMAP folders, if you aren't careful, Entourage X will apply rules to mail in your deleted items folder, and your sent items folder. If you are like me, and keep copies of every email you send, this can become quite annoying, quite quickly. Now, you can create a rule that compensates for this, make it the first rule in the list, and you're fine, but I would like to see a checkbox in the mail preferences that says "Only apply rules to mail in my InBoxes".
The schedules in Entourage X are unchanged from Entourage 2001, which is fine; they are excellent as is, and I am very happy with the ability to have all my IMAP server folders checked for new email in addition to the InBox. This is important if your email server supports server-side rules, as then the email is filtered before it ever gets to the client. If you doubt the value of this ability: I get around five hundred to a thousand emails a day. While Entourage X's rules handle them nicely, if I have to spend any time using another computer, that kind of filtering backlog takes a while, even on my PowerBook G4. If I can have the server do the work for me, then my filters are applied, regardless of what machine I'm using.
This brings me to another quibble in Entourage X. While the rules are excellent in scope and capabilities, if you have sixty or so that you use, and you have a lot of email, it can take them a while to run. Now, this isn't so bad, except for the fact that when you are working while IMAP rules are being applied, Entourage X stutters hard. I've noticed delays of about a second per rule here. It's annoying, not a deal breaker, but if you have a backlog of messages, it's often not worth the effort to try and work while the IMAP rules are running.
Now, this is still nicer than Entourage 2001, which would drag down the entire system under OS 9, but it does show that Microsoft still needs to work on the threading of the database operations. (This doesn't apply to POP rules; at least in my experience they run extremely fast, usually before you even notice them.)
I have heard a lot of complaints about the database in Entourage in general, and I have to say that I have only had one case, and that was with a beta version, where I actually lost any data. My experience has shown the database to be solid and reliable (I admit that under OS 9, I did have a severe allergy to third party extensions, which helps things), and without it, Entourage X simply could not do what it needs to do. Yes, some database operations can be quite slow, and in general, I run a beginning-of-the-month cleanup and optimization of my database, which helps keep it happy, and is a good idea for databases anyway. But I have yet to see anything that supports the "Entourage's database is unreliable" accusations. Usually, you find that there are a few other external factors that get ignored in a juicy round of Microsoft bashing. Another point: if your database is just dead, dead, dead, can't be fixed, you can still recover the data. Open the database up in BBEdit, and you can yank the text info out, so all you end up losing are some events and attachments. (Another reason why I like IMAP over POP.)
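For those more comfortable in the Terminal, the BSD layer of OS X offers a command-line cousin of that BBEdit trick: the strings utility prints every run of readable text it finds inside a binary file. Here's a sketch using a made-up stand-in file, not your actual Entourage database, and certainly not an official recovery procedure:

```shell
# Fake up a "damaged" binary file standing in for a database.
# (\000 is a NUL byte; real database files are full of them.)
printf 'HEADER\000\001\002junk\000Subject: Lotto Results\000\003\004tail' > /tmp/dead-database

# strings skims out every printable run, which is essentially what
# opening the file in BBEdit shows you, minus the line noise.
strings /tmp/dead-database > /tmp/recovered.txt
grep 'Lotto Results' /tmp/recovered.txt
```

You lose the structure, but the text of your messages survives, which is the part that matters.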
One new feature that made me really happy was that I can now see both folders and mail in IMAP folders. Entourage 2001 was a one-or-the-other setup, and while that was within the IMAP specification, I found the limitation to be quite annoying. I have to give some sincere thanks to the folks on the Entourage X team for fixing that.
The interface in Entourage X has changed as well, and although it took me a bit to get used to it, I think it's an improvement overall. Having the views readily available without having to scroll through a rather long folder list is handy. In truth, I never used the folder view for the different functions in Entourage 2001; it was too tedious. I just used the command-key equivalents for the windows. With Entourage X, I find that I use the UI buttons more, so in that sense, the redesign is a good one. You can still open the calendar/address book/notes/etc. in separate windows; just ctrl-click on the buttons. I am also pleased that if I quit Entourage X with my address book and calendar windows open separately, that setting is respected when I start Entourage X back up. A minor thing, but the kind of detail work that makes Entourage X a great application, instead of a good application.
One problem for me, and a lot of people, is the issue of SSL support in Entourage. Well, actually, it's more of an Internet Explorer problem. See, SSL uses certificates so that you can prove you are who you say you are. A lot of places, especially universities, use personal certificates heavily. Unfortunately, Internet Explorer on the Mac has absolutely no support whatsoever for personal certificates. Site certificates, yes. But personal certificates, no. This means that if your email system is using them with SSL, you are going to have some real SSL issues with Entourage X. Since IE can't help here, a nice workaround would be to allow the use of a personal-certificate-enabled browser that isn't IE, like Mozilla. (This isn't a platform issue as much as a Microsoft one; IE on Windows only got this capability in the last few months.) Even better would be independent certificate capability, and/or Kerberos capabilities. Entourage is too good of an application to be this hobbled by a weakness in IE.
Speaking of the address book, I find that Entourage X's address book is, without a doubt, the best email address book available, and better than quite a few standalone PIMs. It gives me room for as many categories, phone numbers, and email addresses as I could ever need. It is, so far, superior to any other email application I've tried. I also like the fact that if I enter a name that has multiple email addresses, I get the default first, and the rest in a handy little pop-out menu, that I can navigate from the keyboard. This is far superior to Eudora's nickname handling which dumps all the email addresses in there, and makes me delete them manually. Then again, in Eudora's defense, the only version native on OS X is a pretty shoddy beta, so that may get fixed some day, if the final version ever comes out.
Entourage X continues the excellent LDAP integration of ENT2001, so if you have an LDAP directory that you use for email addresses, or even multiple directories, you can easily find the address you need from them while composing your message. Unlike the LDAP implementation in Mail.app, Entourage X's actually follows branches.
Let me explain why this is important. I'm a network administrator. So when I set up my LDAP server, I create a main branch for the company with a search base of o=mycompany,c=us.
I then create sub-branches so that I can have smaller address books within divisions, making dealing with them a little easier. I also create a groups branch that only has listings of various company groups, so that they can be found quicker. So, in our example, we end up with five sub-branches under the main one.
Now, since I like to keep certain addresses hidden from the general user population, I put them on the main branch, and just don't allow access to them. This is done by creating separate entries in user address books for each sub-branch. So far, both Mail and Entourage X work great. But now I have access to the main branch. So, instead of creating (in our example) a sixth entry, I just use the main one, o=mycompany,c=us. Entourage X has no problems here; it traverses the sub-branches, and finds what I need. Mail can't do that. It can only see what is on the branch it has information for. So to see all my addresses, I have to have six separate entries in Mail. So while Mail's LDAP is better than nothing at all, it really falls down compared to Entourage X or Mozilla 0.95.
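To make the layout concrete, here is a sketch of the kind of tree being described. The base DN is the one from the example; the division names are hypothetical stand-ins, not anything from a real directory:

```
o=mycompany,c=us                      <- main branch (the one entry Entourage needs)
  ou=sales,o=mycompany,c=us           <- hypothetical division sub-branches
  ou=marketing,o=mycompany,c=us
  ou=engineering,o=mycompany,c=us
  ou=support,o=mycompany,c=us
  ou=groups,o=mycompany,c=us          <- the groups branch
```

A client like Mail needs a separate server entry for each of the five sub-branches; a client that follows branches, like Entourage X, gets everything from the top entry alone.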
The Address Book is another area where the database-driven integration of Entourage X's features comes in handy. I can easily link contacts, emails, and calendar events together, so that I can keep track of things more easily. I can also do a custom find for, say, all email from Bob, sent to me with the subject line of "Lotto Results", and save that as a custom view. Then I just open that view, and poof, the result of my Lotto attempts in all its glory.
Since we've touched on the calendaring in Entourage X a bit, we should take a closer look at it. The calendar has a new interface, like the rest of Entourage X, and it's pretty good. I don't see any major use differences between the versions, but it doesn't get in my way, so I'll say it's a good interface. I like the different views, not so much the standard day/week/month stuff, but that I can resize the custom views pane, so that in addition to the current month, I can see the next two or three months, if I have the screen real estate.
While Entourage X cannot hook up to any kind of calendar server (Exchange, Steltor, or MeetingMaker), it can use iCal or vCal to send and receive meeting/event notices. Now, there are ways to get information from these servers into Entourage X, but you can't just say, "Here's my calendar server", and gain the features that those systems give you. But you can do a very good job of serverless calendaring and scheduling, all via internet standards that aren't owned by anyone, and all the servers I mentioned support these.
Since I mentioned Exchange, let's take a second to kill a myth. Entourage X is not, repeat not the new Exchange client. Entourage X only supports internet standards for messaging and calendaring. If you need an Exchange client, you are stuck with Outlook 2001 in Classic for a while longer. As far as I know, Entourage X is not replacing Outlook. Will that change some day? Who knows? I don't do predictions. But for now, it isn't happening.
The other two major windows in Entourage X are tasks and notes. Neither of these is much different from Entourage 2001, although Notes has new prominence.
Finally, I have to mention Entourage X's AppleScript support. It's simply excellent. Superb. Outstanding. Get the idea? This is how an application should support AppleScript. The dictionary is full-featured, and complete. You can do almost anything with AppleScript that you can do manually, not counting reading the mail for you. If you deal with any kind of repetitive email process that the mail rules cannot handle for you, then the AppleScript dictionary of Entourage X can. I really hope that Mozilla, Apple, and Eudora all use Entourage X's AppleScript implementation as the bar for their own. It would make their products better by an order of magnitude.
Word X is pretty much unchanged from Word 2001. Given the focus of Office X, this is not surprising. There are some new features, chief among them being non-contiguous text selection. This is old hat to users of Nisus, but it's been a long time in coming, and good to have. Oddly enough, one of the biggest changes in Word ends up being the preferences. They have dumped, and good riddance, the many-tabbed interface of 2001 for a more coherent, vaguely System 6 control panel-ish window. I find it much nicer to navigate and use; the tab thing was just too tedious for what preferences needed to do.
Another feature that initially seemed like a "big whoop" change, but is now critical to my work, is the 'Clear Formatting' ability. There is nothing worse than changing templates, cutting, pasting, etc., and getting formatting conflicts. No more! Just select your text, hit the 'Clear Formatting' style, and bang! No formatting. You can then apply other formats without worry of conflict. If you do a lot of writing, the advantages of this are almost immediately clear. When you combine it with non-contiguous text selection, this becomes a powerful weapon. Considering how frustrating format conflicts can be, and since there is no easy way to just view the formatting structures (although that would be an excellent feature, *cough* ShowCodes *cough*, especially in long documents), being able to at least quickly clear and reapply formatting is 'a good thing'.
Other than the obvious appearance changes, the interface has a feel of having been buffed and polished more than reworked. The formatting palette does a graceful swoosh in and out of the toolbar when you activate it or deactivate it. TextArt now allows for transparencies, like other components in Office. You can use PDF files as embedded images, and apply transparencies, effects, etc. to them as well.
I was quite pleased to see that the Equation Editor has been carbonized. While not a universally used feature, for those who need it, the Equation Editor is indispensable. The editor is actually a simplified version of MathType. Now, for those of you quite correctly wondering "what about MathType?", well, its developers haven't abandoned the Mac. They are working on a fully carbonized version of MathType for Mac OS X that will be more in line with the current 5.0 Windows version than the 3.7 Mac version. While it won't be available until some time in 2002, the fact that they are working on it is good news for all.
The MathType issue does bring up a point for people using Word addons. If an addon isn't carbonized, you really want to talk with its developer to make sure that you won't be making a mistake by going to the new version of Office. I personally didn't have any major problems with the transition, and use Word X as heavily as I used Word 2001, but if you have custom macros, VBA applications, or addons, test them before making the switch.
Word X seems to save faster than Word 2001, but that could also have a lot to do with my going from a 1999 PowerBook G3 to a TiBook recently, so speed judgments are quite relative here. Scrolling works well enough for me, although I admit to not getting the use of scrolling as a benchmark. There are so many other, better ways to get from point A to point B in a document that don't involve scrolling hundreds of pages of text.
One big benefit of Word X is that you no longer have to deal with memory allocations. Anyone who used Word 2001 discovered quickly that the default memory settings were humorous at best, and a sick joke at worst. In my case, due to the way I used Word 2001, I set my memory allocation at eighty-five MB, and it was happy. OS X takes care of this, and good riddance I say.
On the downside, Word's AppleScript implementation has real issues. It's not that it lacks completeness, or that it's hard to figure out. It's just that AppleScript doesn't work well at all in Word. The reality of it is, if you want to script Word reliably, you have to either learn Visual Basic, or use RealBasic. Until some changes are made, AppleScript and Word just don't play well together. At least by including the demo of RealBasic with Office v. X, Microsoft gives you a different option than liking or lumping VBA.
The auto formatting in Word is still as...annoying as it has always been, and I still go into the two or three places I have to turn it off. Adding things to the Auto-Text entries is still annoyingly inconsistent, and while I understand that this is just impossible to fix, the auto-capitalization feature is almost more trouble than it's worth. Another feature that falls into this category is Word's HTML output.
While better than Word 2001's, as in, it works, and Word 2001's never really did for me, this is still some of the most atrocious HTML on the planet. Actually, it's just that Microsoft is so into XML that they haven't figured out that being able to just replicate a Word document in as simple HTML as possible would be a good option. XML has its place, but not everywhere, all the time.
In general, Word has been quite a dull upgrade though. That's good. It means I didn't lose time dealing with some 'interesting' new feature or quirk. The integration with the other components of Office, in particular Entourage's address book is smoother, always a nice touch, but the attitude with Word seems to have been 'leave it alone wherever possible.' Unfortunately, this was also applied to old problems as well as new features.
One feature I would love to see, and would buy a new version for alone, is the ability to create PDF documents of the same quality as the Acrobat plugin for Office on Windows gives you. I'm not talking about simple PDF output. I mean that tables of contents are kept intact, endnote links are live, hyperlinks are live, etc. In other words, the structure of the Word document, not just the appearance, is kept intact when going to PDF, with little more work than hitting the 'Print to PDF' button.
In the past, you had to rely on Adobe for this, and since only 10% of Acrobat sales come from the Mac Market, they didn't see a point in doing this. (Before you start up about the 10% number, that came as an answer to a question I asked of Bruce Chizen, the head of Adobe, at Seybold Boston, 2001. If you don't like that number, fuss at him.) That's an honest response. Why do that much work for 10% of your market? I don't agree with Adobe on this, and I think they have done a horrid job of marketing Acrobat to the Mac audience, but it's a logical answer.
The final entry in my review, and the other component (that I use, at least) to get a lot of work done, is PowerPoint.
First off, the new graphics handling capabilities are fantastic. You get a lot more control over things, more formats, more effects, which for an application like PowerPoint is what you want. I find that PowerPoint X is much snappier, both when building slides, and using the slide sorter view than PowerPoint 2001.
The new packaging feature is a welcome addition, and one that I always liked from the Windows side of the house. It's a nice alternative to the QuickTime options, as there are times when QuickTime is not as good an option as you might think.
An unexpected benefit for me was a side effect of running natively in Mac OS X. As with any other application, you can easily print a PowerPoint slide to PDF. What many folks in the Mac community don't know is that Acrobat (NOT the free version) has the ability to turn a PDF file into a pretty sophisticated slide show on its own. PowerPoint slides, printed to PDF, give you a solid basis for this, and then Acrobat allows you to add various effects and transitions, and create a presentation that runs anywhere you would want it to, in a nice open format.
One thing I've started making more use of in PowerPoint X than in PowerPoint 2001 is master slides, multiple masters, etc. I don't think this particular feature was improved so much as that the help on it is clearer in PowerPoint X. In any event, if you use PowerPoint regularly, then learn this feature.
The QuickTime exports are essentially unchanged, outside of any QuickTime changes in Mac OS X. Again, if you use this a lot, then learn how to use QuickTime Pro, your movies will improve greatly.
I did notice that copying slides between presentations, or within a presentation, works much better than in PowerPoint 2001, and a gander at the preferences reveals the "Keep designs when copying slides between presentations" checkbox in the advanced tab of the preferences. Yes, I said tab. Unfortunately, unlike Excel and Word, PowerPoint retains the older tabbed-window interface for its preferences. This needs to be fixed; consistency counts in a suite. Entourage could use some work here as well, especially when you realize all the places you have to go to make settings changes in that application.
One feature I'd like to see would be of great benefit to folks using PowerPoint with multiple monitors, like yours truly. That would be to have the presentation running on one monitor, say an LCD projector, and have a window with nothing but a thumbnail of the current slide and my presentation notes for that slide. That way, I could be completely paperless. Another benefit would be to control the presentation from the notes slide.
Well, that was quite an article. Again, I apologize to Excel users, but honestly, I just have used it so little, that all I would be doing was regurgitating what you can get from Microsoft's web site on Excel, and I personally hate those types of reviews, so why write one?
Mac OS X 10.1 pt. 5
created 21 Oct. 2001
Well, this is the final chapter in our look at Mac OS X 10.1. All in all, I have to say that I am extremely pleased with this release, as an administrator and as a user.
The speed and performance increases are more than welcome, and judging by the increase in announcements and products from major Mac ISVs, like Adobe, Microsoft, Deneba, and Corel, 10.1 seems to be the version of Mac OS X that they have been waiting for as well. I'm also pleased to see Carbon applications starting to take advantage of the new Services access. (BBEdit wins for the first Carbon application to do this, with their 6.5 release. They also get a HUGE thank you for their improvements in support for the OS X command line, allowing me to replace my use of emacs with BBEdit. If you use BBEdit, run, don't walk, and get this update; it's worth every penny. And at under $40 to upgrade from BBEdit 4.5 or later, or DreamWeaver 1 or 2, and $79 to crossgrade, or upgrade from BBEdit Lite, it's not going to put a serious dent in your bank account for what is a major upgrade.)
Microsoft's Word public preview is available, and gives you an idea of the amount of work they have put into the OS X version of Office. The non-contiguous selection ability, and other improvements show the seriousness with which Microsoft is taking Mac OS X. Instead of just doing a basic port, and being done with it, Microsoft is working hard to make Office v.X the premier application for this operating system, and from what I have seen, they have succeeded in setting a high standard for other developers.
But it's not just the big vendors, either. Long time Mac utility vendors like CE Software (QuicKeys) and Power On Software (the upcoming releases of Now Up-to-Date and Now Contact) are finally showing up.
It's not just traditional Mac vendors either. With Mac OS X in general, and 10.1 in particular, the oceans of useful stuff on the Unix side of the world are flooding in as well. This may even help Mac users who desperately want native versions of applications that used to run on the Mac, but were dropped for various reasons. As an example, while RSI recently announced that they are dropping the long-awaited Cocoa rewrite of IDL, they are looking at doing a port of the X Window version of IDL. Would that give us a juicy Aqua shell around IDL? Nope. Would it give us one of the premier scientific imaging applications running on a processor that it just screams on? Definitely. So is this "A Good Thing"? Without a doubt. (For more information on IDL and Mac OS X, go to RSI's web site. Details and contact information are available there.)
In short, the long wait for software on Mac OS X is soon to be over.
I also can see, now that 10.1 is out, an increase in hardware support for Mac OS X. Printer support is already better, and scanner support seems to be following close behind. I'm also hoping, now that the Mac has the plumbing to really support it, that other things, like better hardware RAID support, full uninterruptible power supply (UPS) support, and other such things will be coming soon. This is where the Unix part of Mac OS X can give us the features that we have wanted, and needed, for so long.
That's not to say everything is perfect. There are still some issues that need resolving, but the picture is much brighter than it was for Mac OS X 10.0.X. Hopefully, by MacWorld San Francisco in January, we'll have some really cool things to talk about.
Mac OS X 10.1 pt. 4
created 21 Oct. 2001
In my last article, we took a look at the BSD environment changes in Mac OS X 10.1, good and bad. This time, let's take a look at networking, as Mac OS X is, at its heart, a networked OS.
Like many of the improvements in Mac OS X 10.1, there isn't one major improvement to networking, but a lot of little fixes that add up to a better overall system. Some of the minor changes involve things like adjustments to PAP, so that LaserWriter printers are supported better, allowing for a greater number of AppleTalk interfaces, and some minor DHCP improvements. More small, yet welcome fixes include PPP tweaks for better compatibility with utilities like the Berkeley Packet Filter (bpf), more modem CCLs, some adjustments to PPPoE for better compatibility, and so on.
Other fixes include things like the bugs in the AppleShare implementations that would leave you with a zombie network mount that you couldn't access, or unmount without a reboot. I also don't seem to have the bug where I would suddenly see myself half a dozen times in the Connect to Server window. Bringing back support for AFP over AppleTalk was a good idea, as the Mac world is still reliant on AppleTalk, and until TCP/IP is as simple to use as AppleTalk, you are going to see a real delay in being able to get rid of AppleTalk. (For a good look at a group that is trying to make TCP/IP easier to use, check out the ZeroConf web site, and the IETF ZeroConf pages.)
This is a good sign, as it means that the networking layers of Mac OS X weren't as in need of repair and optimization as some of the other parts, like the Finder. But, like everything, with all the good, you still have some odd implementation issues.
Now, there is one new feature, the SMB client capability, that seems to be a major plus. Maybe, maybe not. For one thing, it is, to me at least, more of a checkbox filler. That is, Apple can say, "Look, we integrate with Windows networks." But let's really look at this. One obvious issue is that you can't browse Windows networks. You have to use various forms of the 'smb://...' URL to mount drives. There doesn't appear to be keychain support for SMB mounts (if there is, I haven't found it). You can't share drives with it, and you can't use it for printing. So while it's better than nothing, it's barely better than nothing. On the other hand, it gives Apple another covered square in "Buzzword Bingo", and makes OS X even more buzzword-compliant, which is always good. But for serious use, you are better off using something like Sharity, or DAVE, as soon as it is available.
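For reference, those "various forms" of the URL boil down to patterns like these. This is a sketch of the common smb URL conventions, not a tested list of exactly what 10.1 accepts:

```
smb://server/share                     basic form; you get prompted to log in
smb://user@server/share                user name spelled out up front
smb://WORKGROUP;user@server/share      workgroup included as well
smb://user:password@server/share       works, but puts your password in
                                       plain sight -- avoid it
```

You type one of these into the Connect to Server window, and, if all goes well, the share mounts like any other volume.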
Another issue I have with Mac OS X is the way NFS is handled. While it is nice to have this capability, Apple simply needs to make NFS easier to use. Mac OS X needs a GUI tool for NFS sharing and better integration in the "Connect To Server" window. Mac OS X and Mac OS X Server need to allow you to set more options for NFS mounts and shares, as when you are dealing with different Unixen (Linux and IRIX are particular problem children with NFS), you may have to tweak more parameters to get NFS to work smoothly. While I am grateful that there is a third party tool like NFSManager available, Apple should make easy configuration of NFS mounts and shares a part of Mac OS X, especially considering the way NFS is trumpeted on the Mac OS X section of the Apple site. I don't mean to get too smarmy, but if the only way to use NFS without third party software is to teach yourself the joys of the 'mount' command, then you may want to not talk about 'chatting up' NFS servers. Unless you consider learning a new language a normal part of 'chatting up'.
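And those joys look something like this. A sketch only: it obviously needs a live NFS server, the server name and export path are stand-ins, and the exact options a given server demands will vary, which is rather the point:

```shell
# Make a mount point, then attach the remote export by hand.
# 'bigserver' and '/export/home' are hypothetical names.
sudo mkdir -p /private/nfs/home
sudo mount_nfs bigserver:/export/home /private/nfs/home

# Many servers insist on a privileged source port; -P asks for one,
# and you get to discover which servers care the hard way.
sudo mount_nfs -P bigserver:/export/home /private/nfs/home

# And to detach:
sudo umount /private/nfs/home
```

Not hard once you know it, but a far cry from clicking a server name in a browser window.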
While I am happy that Apple has decided to really start using WebDAV as a lightweight filesharing mechanism, there are some parts of it that I'm not thrilled with. For one thing, mounting and browsing WebDAV volumes is, shall we say, tedious? Especially when using iDisk via WebDAV, browsing directories with more than a few items seems to send you into the lower layers of SCOD (spinning cursor of death) hell. I'm also curious as to why it's so hard to set up Mac OS X as a WebDAV server. It seems to make no sense to not allow you to use WebDAV in Mac OS X to the same extent as you can use AFP over AppleTalk or TCP/IP. This is a good idea marred by a puzzling implementation.
There are some other things that I would like to see in 10.1, like better Windows printing support in Print Center, and support for LPRng, which brings LPR printing a number of features that Mac users take for granted. I would also like to see better integration for the Kerberos authentication protocol, which is supported in Mac OS X, but in what seems to be a limited manner. Considering the issues with security on any platform these days, and the cross platform nature of Kerberos, I hope that Apple will make Kerberos a more fully supported authentication mechanism for Mac OS X.
So, like a lot of things in Mac OS X, networking is a mixed bag, but definitely better than it was in Mac OS X 10.0.X. Hopefully, we'll see a continuing set of adjustments through the coming months that will fix the remaining issues.
Next time, the finish to our Mac OS X 10.1 series.
Mac OS X 10.1 pt. 3
created 21 Oct. 2001
Well, since Mac OS X is a Unix-based OS, we should take a look at the BSD side of things. Now, I'm not going to talk about programming changes, shell scripting, etc. First, I'm not a programmer. Second, there are a number of sites, such as Apple's Developer site, and Stepwise, that will handle these issues far better than I would.
First, there aren't a lot of changes in the overall command line environment. This isn't surprising, as that environment is based on some fairly widespread standards, and for Apple to alter it greatly would cause a lot of incompatibility issues. A reading of the 10.1 TechNote confirms that most of the changes have to do with tweaking individual commands for greater compatibility. There are some changes that may affect how you do certain things within the command line, such as the replacement of wget with curl. Both utilities are used to download files, and you don't lose capability because of this; you just have to remember that wget is gone, and curl has a slightly different syntax.
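The syntax difference in practice, sketched with a file:// URL so it can be tried without touching the network (the paths here are throwaway stand-ins):

```shell
# The main habit to relearn:
#   wget http://server/file.tar.gz       (old)
#   curl -O http://server/file.tar.gz    (new; -O keeps the remote
#                                         filename, -o NAME picks your own)
# A self-contained demonstration:
mkdir -p /tmp/curl-demo/dest
echo "hello from curl" > /tmp/curl-demo/source.txt
cd /tmp/curl-demo/dest
curl -s -O "file:///tmp/curl-demo/source.txt"   # saved as ./source.txt
cat source.txt
```

Once -O becomes muscle memory, you won't miss wget much.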
The other change that I noticed almost right away (okay, with every new release, I poke around the Unix command directories, like /bin, /sbin, /usr/bin, and /usr/sbin, looking for things like this) was a new command, bless. bless is found in /usr/sbin, and is a way to set boot options for volumes and in Open Firmware. In a nutshell, bless allows you to set the volume and system folder that the system boots from via the command line. This is especially handy for administrators with OS X Macs in areas that are hard to control. One of the security issues with Mac OS X is that if you manage to boot the machine into OS 9, you can use a utility like ResEdit, or File Buddy, to make Mac OS X's invisible Unix folders visible, and wreak havoc with them. Well, bless can be a way around that.
You can use bless in conjunction with the -LogoutHook switch in the /etc/ttys file, so that whenever a logout occurs, the boot parameters are set back to what you want them to be. That way, you can override the settings in the Startup Disk settings panel. If you add the -PowerOffDisabled switch to /etc/ttys, then you disable the Shutdown and Restart options in the Finder, but from a lower level than the GUI, so it's harder to override them. By doing this, you can greatly limit the ability to casually change the boot source for an OS X Mac. Now, this doesn't keep anyone from hitting the reboot button on the case of a desktop Mac, it doesn't prevent booting from CD, nor does it override the option key boot selector. However, if you only have one partition on the disk, and it is set to Mac OS X, then the option key boot selector will only give you that option. The reboot button can be secured via creative mounting solutions for the Mac, so the only override that you can't secure without disabling hardware is the CD-ROM drive. So you still want to keep a bit of an eye on things, but you can at least drastically limit the ways of booting from a different partition. I do have an issue with bless being executable by anyone, but that is easily fixed. The details on customizing the login/logout procedures are available in the Mac OS X System Overview, available in hard copy from fatbrain.com, or as a PDF download from Apple's Developer site.
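A sketch of the basic invocation, from my reading of the bless man page; the volume name is a stand-in for your own, so double-check the flags on your system before relying on this:

```shell
# Point both the volume and Open Firmware at the Mac OS X system
# folder on the named volume, so that's what the machine boots.
# ("Macintosh HD" is a hypothetical volume name.)
sudo bless -folder "/Volumes/Macintosh HD/System/Library/CoreServices" -setOF

# And the easy fix for bless being executable by anyone:
sudo chmod 750 /usr/sbin/bless
```

Drop the bless line into a logout hook, and every logout quietly resets the boot parameters to your chosen configuration.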
Another welcome change was in the Console application, which you can now more easily set to keep a log of application crashes in Library/Logs in your home directory. Each crashed application gets its own log. You can also set Console to automatically display logs while it is running. These are fairly detailed logs of the application state at the time of the crash, and can help support personnel working with a vendor to figure out why an application is crashing.
There are some new behaviors in the command line, and aspects of the command line environment, that I am not thrilled with. There is a noticeably greater delay when opening a terminal window in 10.1 than there was in 10.0. I'm not sure why; some think it could be related to font issues. While the new ability to AppleScript the command line environment is better than nothing, Terminal needs a far better dictionary. Creating a new window every time you run a new, separate command is tedious.
There is no easy way from the command line, or via AppleScript, to create users, or to create users with parameters other than what the Users settings panel gives you. This means that unless you are running Mac OS X Server, there is no easy way to set advanced user parameters. While this may not seem like a big deal, if you are in a Solaris- or AIX-based environment, laying out the cash for OS X Server just to do this is not fun.
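In the meantime, the raw material for scripting this yourself is there in NetInfo's command line tool, niutil. The sketch below is my own guess at what scripted user creation could look like -- the property names and the UID/GID values are illustrative assumptions, and the script runs in a dry-run mode that only prints the commands rather than touching NetInfo:

```shell
# Sketch: creating a user with custom parameters via NetInfo's niutil.
# The run() wrapper records and prints each command instead of executing
# it; to apply for real (as root, on an actual Mac OS X box), change
# run() to execute its arguments. UID/GID/shell values are examples.
NEWUSER=jdoe
CMDS=""
run() { CMDS="$CMDS
$*"; echo "$*"; }

run niutil -create . /users/$NEWUSER
run niutil -createprop . /users/$NEWUSER uid 505
run niutil -createprop . /users/$NEWUSER gid 20
run niutil -createprop . /users/$NEWUSER shell /bin/tcsh
run niutil -createprop . /users/$NEWUSER home /Users/$NEWUSER
```

Even with the wrapper removed, you would still need to set a password separately (via passwd, for instance), so this is a starting point rather than a finished tool.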
Finally, there needs to be an effort to ensure that every command has a man page. This should also be tied in with an Apple-provided connection between the man pages and the Mac OS X help system. There are several third party tools that do this, but Apple should provide this as an aid to Mac administrators who aren't adept at the command line yet. This would also aid in trying to find the appropriate command to perform a task.
So while Apple has done some decent work with the command line environment, there is still room for improvement, especially if you are an administrator. Next time, we'll take a look at networking, and some other updates in Mac OS X 10.1.
Mac OS X 10.1 pt. 2
created 8 Oct. 2001
So, last time, we took a look at some of the Finder improvements, and the AppleScript improvements in the 10.1 release of Mac OS X, so let's look at another area that was in, quite honestly, terrible shape in Mac OS X 10.0, namely printing.
Well, I have to say that I'm quite pleased with the improvements in the printing architecture. Both my Apple LaserWriter IIG, via AppleTalk over Ethernet, and my Epson Stylus Color 880 were found with very little trouble. In fact, the Epson required no work at all. By the time I had opened up the Print Center application, the Epson was there and ready to go. That is what plug and play is all about: seamless integration of peripherals.
The printing options are finally all in place as well. When I manually set up an LPR printer (an HP LaserJet 5Si Mopier in this case), all the output tray options were finally there, and selectable, like the stapler tray. In fact, all of the options in the Mopier's PPD file were there, and correctly represented. The same with the Epson. The beginnings of AppleScript support are even available, so although you cannot create or delete printers, you can get printer status, bring up the printer browser, etc. A good beginning, and better than what we had in 10.0.x, which was...nothing.
Another improvement in the printing architecture is the ability to print to a Postscript file, in addition to PDF files. For anyone dealing with print houses, or scientific documents, native Postscript generation is a critical function. I am happy about it, because I prefer the native version of MacGhostView for my PDF generation. It gives me greater control over the output, and is much faster than Adobe's Distiller, which is still not native for Mac OS X. But to use it, I need a Postscript file to send to it, so the new print-to-file capabilities are greatly appreciated.
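For those without MacGhostView, the same distilling trick can be done with a plain Ghostscript install, assuming you have one. The command line below is echoed rather than run, and the file names are made-up examples:

```shell
# Distill a printed-to-file Postscript job into a PDF using Ghostscript's
# pdfwrite device. File names are stand-ins for your own.
PSFILE=report.ps
PDFFILE=report.pdf
GS_CMD="gs -dBATCH -dNOPAUSE -sDEVICE=pdfwrite -sOutputFile=$PDFFILE $PSFILE"
echo "$GS_CMD"   # printed here for illustration; on a box with gs, run it directly
```

The -dBATCH and -dNOPAUSE flags keep Ghostscript from stopping for interactive prompts, which matters if you ever want to script batch conversions.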
These improvements in the printing architecture are critical for a number of applications that Mac users have been waiting on, such as Quark XPress, Adobe's InDesign, Photoshop, and Illustrator, Deneba's Canvas, and other applications that need a full-featured print architecture behind them. Face it, most of these applications would have been crippled in Mac OS X 10.0.x.
There are some things that aren't in 10.1's printing that should be, like IrDA. While not a great feature for most, it is heavily used in Europe, and since a lot of supported Apple hardware has IR ports, there needs to be the ability to browse for, and use, IR printers. More AppleScript support is needed. While manual creation of printers is far easier than it was in Mac OS 9, if you are creating five or six printer entries, and some of them are LPR (which has no auto-discovery capability), and you have a lot of Macs that you are upgrading, this quickly starts to get tedious, and can be an obstacle, or at least a slowdown, in OS X adoption.
I also miss the ability that you had in Mac OS 9 to select custom PPD features. I'd like to see the ability to save multiple custom printer settings instead of the standard single setting. This is an important feature when you are talking about printers with the ability to hold a job from printing until a password is entered. Finally, there needs to be a better way to create dummy printers that only print to file. Currently, you have to create an LPR printer that doesn't exist, and remember to manually set it to print to file. With the improved print file generation capabilities, this hole in the feature set becomes more noticeable. But all in all, 10.1 is far more useful for printing than 10.0 ever was.
Another area of improvement is the System Preferences. Right away, there are some new additions that were sorely missed, both functional and aesthetic. Under the aesthetic heading, the new desktop settings panel allows you to set separate backgrounds for multiple monitors via drag and drop. There is still no way to set a folder of images for random backgrounds, although I imagine a third party hack could do this. Universal Access has returned, with a new Quartz-based trick: When you have sticky keys turned on, and you hit a modifier key, or series of modifier keys, the icon for that key floats in the upper right hand corner of your screen. It's larger than a menubar icon could be, and very translucent, so it isn't blocking any information that may be under it. An improvement to the keyboard preferences, which many have wanted for a long time is the 'Full Keyboard Access' tab in the Keyboard settings panel. At long last, you can finally run through dialog boxes from the keyboard. Yes, it's copied from Windows. I don't care, I've wanted this for a while, and I'm glad to have it. Another option here is the ability to bring various screen elements like the Dock to the foreground by hitting a set of modifier keys. Again, if you are on your keyboard a lot, like me, it's nice to not need the mouse as much for this kind of thing.
The General controls have been beefed up with the ability to set the number of recent items tracked in the Apple Menu, and to turn off font smoothing for fonts up to twelve point type. This is a good start, but if you want real control over your fonts, check out TinkerTool, from Marcel Bresink. (Actually, if you are a network admin, you should be very familiar with Marcel's work. He has the best, bar none, guide to NIS and OS X integration, and some awesome tools for using NFS and NIS with Mac OS X.) In addition to fonts, TinkerTool allows you to set some handy options for the Dock and Terminal. Speaking of custom settings, if you do have custom preference panes, there is now a section for those called 'Other', so that you know that these are not Apple-provided panes. A nice option for troubleshooting.
The Login panel has a new trick, which is to give you a list of allowable users for your Mac, a la Macintosh Manager. While not a terribly good way to do things from a security point of view, for educational users, this is a way to smooth the transition. Dovetailing with this is the ability of the Users settings panel to display a custom icon for each user in this list, a la Mac OS 9's Multiple Users control panel. (While we're on the subject, since Apple seems hesitant to do this outside of Mac OS X Server, could someone write a nice, reliable way to set all kinds of user options, like UID, GID, shell, home directory, etc.? NetInfo Manager is still a really bad application to use for this, and if you are trying to insert Mac OS X into an existing Unix network, not having this ability is really, really bad.)
Classic no longer has the option to hide classic startup, which is fine, it never really worked anyway. The menubar clock now displays 24hr time consistently, nice for ex-military fogies like me. Unfortunately, the Internet settings panel is still nowhere near the functional level of Mac OS 9's Internet control panel, which makes life really frustrating at times. The Energy Saver panel, while adding a few options, still doesn't have the ability to have separate battery and power adapter settings, which is very frustrating. Power Management in general on Mac OS X is still not where it needs to be yet, but it's getting better. Finally, you can have things like your display preferences, volume and modem settings and battery indicators visible in the menu bar for faster access than the System Preference application gives you.
So we aren't done with our look at 10.1 yet, but we are beginning to see that almost everything in this OS has been updated and improved. There's still room for improvement, but it's a smaller room than it once was.
Mac OS X 10.1 pt. 1
created 1 Oct. 2001
So, Mac OS X 10.1 is out, and we can finally start talking about this major update to the Mac OS. First of all, while my laptop isn't the fastest hardware on the block, there is a definite speed improvement to the overall OS. This is due to a fun little trick called optimization. It's tedious, time-consuming, and critical to any major development project. The Finder is a major piece of code, and, of the released applications for Mac OS X, probably the single biggest Carbon application to date. Unfortunately, being what it is, it has also been the source, legitimately, of most of the complaints regarding both Mac OS X's speed, and the speed of Carbon applications in general. Well, it looks like the optimization work paid off. The Finder is much faster in 10.1 than in 10.0. This is not to say that it's perfect, but now it's usable on a wider variety of hardware than it has been.
But in addition to speed, there are a number of smaller things that I am enjoying, like the return of the 'cmd-firstbuttonletter' keystroke in dialog boxes. This was one of those minor things that I never realized how much I missed until it was gone. Well, it's back, and my speed within the interface has increased noticeably, due to less mousing. Even better, Apple has lifted the idea of keyboard access to dialogs from Microsoft, so I can set buttons and lists in a dialog box without needing the mouse. I can also assign keys to change focus to the Dock, the Finder, toolbars, palettes, etc. This is a very useful option, especially if you are using a KVM switch, and the sometimes less than optimal mice that are used in those situations. (There's a Compaq keyboard with a wee trackball that pops right to mind as particularly agonizing.)
Menu additions are back, for the things that traditionally were in the menu bar, like battery status, but also for things like AirPort control, modem, sound, and monitor control as well. You can still use docklings in the Dock if you like them, but having a choice is quite nice. Apple also gives us the Mac OS X version of OSA Menu, a menu addition that gives you access to all sorts of AppleScripts. Much missed in Mac OS X, its return tells us two things. First of all, the menu addition is called ScriptMenu.menu, which tells us that Apple seems to have come up with an organized, reliable way to handle menu additions in Mac OS X. The installation of the addition is done by dragging the ScriptMenu bundle onto the menu bar. The second thing this tells us is that AppleScript is moving out of its red-headed stepchild status at Apple.
Watching Sal Soghoian, the 'AppleScript Guy' himself, demonstrate the new capabilities of AppleScript in 10.1 was more than just good. For the first time in a long time, AppleScript got center stage at a Steve Jobs keynote, and it's been needed. AppleScript is just one of those things that you don't think you need until you realize you can't work without it. I know that as a network administrator, my job would be harder by a factor of ten without AppleScript. AppleScript gets more than just a new menu addition and center ring with 10.1, however. It finally gets an industrial-strength IDE and GUI creator from *Apple*. Yes, there have been many tools for doing really excellent work with AppleScript, with Script Debugger, Smile, FaceSpan, and Scripter being the best available. But none of them were from Apple. Until the demonstration of AppleScript Studio, (okay, they have to change the name. I really don't see Apple wanting to proudly market Apple's new ASS environment, although I can see the application for law schools and political science schools...), the only tool you could get from Apple was Script Editor, which was a nice little tool, but you couldn't do a lot of things with it...like debugging complex scripts and handlers easily.
AppleScript Studio changes this. Name aside, what this product shows is that AppleScript is now a peer language along with Objective C and Java. So, once this is released, the same environment that you can use to develop major commercial applications can be used to create AppleScripts. The same tools that you can use to create world-class GUIs for Objective C and Java now can be used for AppleScript. One of the greatest limitations of AppleScript was never the language, but the tool set. Apple finally gives you a tool set that is as powerful as you could ever want it to be, and this is going to change AppleScript's use in a major fashion.
But as our old friend Mr. Popeil likes to say, "But wait, there's more..." You can also finally script the Terminal application. You have been able to run AppleScripts from the command line for a while now, via the osascript command. But now, there is finally a dictionary for Terminal. This is not the only option of course, and not the most full-featured, but it is there, and it is from Apple. You also get the ability to set scripts in the Finder window toolbar, and run them from there, as droplets, or by clicking on them. But even more important than that is the support for XML-based protocols like XML-RPC and SOAP in AppleScript. This means that you can use AppleScript to interact with any type of service using these protocols. When you consider that a large amount of Microsoft's .NET push is based on XML and SOAP, the reach that these capabilities give scripters is essentially limitless, and means that along with workflow, you can actually use AppleScript for the same types of things that you would use other languages like Visual Basic for. AppleScript finally has the equipment to move into the major leagues, and AppleScripters everywhere should be grateful not only to Apple for doing this, but to folks like Sal, Cal Simone from Main Event Software, Mark Alldritt from Late Night Software, Jon Pugh, and everyone else who has worked long and hard to get AppleScript to where it is about to go.
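To give a flavor of the shell-and-AppleScript two-way street, here is a sketch. The osascript command is real, but the exact wording of Terminal's dictionary verbs is from memory, so treat the script text as an assumption and check Terminal's dictionary in Script Editor before depending on it:

```shell
# Drive Terminal from AppleScript, launched from the shell via osascript.
# The command is echoed here for illustration; on a 10.1 box, drop the
# echo to actually open a new Terminal window running uptime.
OSA_SCRIPT='tell application "Terminal" to do script with command "uptime"'
echo osascript -e "$OSA_SCRIPT"
```

Going the other direction, an AppleScript can call back into the shell, which is where the real workflow possibilities start to open up.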
Well, that's it for this installment. We'll be continuing our tour of 10.1 in the articles to follow. I encourage anyone who has been holding off of Mac OS X to really think about installing 10.1 and getting used to it, and preparing for the results of the promises.
created 21 Sept. 2001
With all that has been going on with New York, DC, and the events of September 11th, it occurred to me that disaster recovery is suddenly a major issue for network administrators everywhere. While terrorist attacks are not the norm for a need to implement a disaster plan, the fact is, bad things happen all the time. People get fired, transferred, hit by a bus. (Don't dismiss that last one. At a seminar I taught, someone said that this had happened to the only network administrator with the domain root password. It took them a month to dig out from what an errant step caused.) Buildings burn down, they are destroyed by earthquakes, hurricanes, and floods; people break in and steal things. The world is an imperfect place, and that is why you need a disaster plan.
Now, no one plan fits all, but in general, ask yourself, "How much work can we afford to lose?" Once you have answered that, then figure out what you can spend. This type of budget is the worst to justify, especially for recurring charges, as, if things go well, you will never actually use that service. The ROI on this type of thing is almost binary...it's either nil, or 100%. The problem is, you don't ever want to get to the 100% part.
So, what to think about? Well, first of all is your backups. This is the simplest part of a disaster plan, and the one that gets messed up the most. First of all, do restores. Regularly. Test that system. If you can't restore, your backups are a waste of time. Secondly, do off-site backups. Now, for smaller companies, this tends to mean "Bob's basement". Well, okay, professional storage companies can cost a lot, but if you are going to use the basement, invest in a good fireproof box. Even better, go talk to the local fire department, and ask them what container protects its innards the best. They are your best source for fire-related issues. But if you are going to do off-site backups, and you can afford it, use one of the professional storage companies, like Iron Mountain. These companies maintain secure facilities that are temperature controlled, etc. Basically, they make their money by keeping the data you send them safe. While they may be more expensive than Bob's basement, they are also more reliable, and if you ever need that data, you'll appreciate that factor.
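The "do restores" advice is easy to automate, at least in miniature. Here is a throwaway sketch of the idea -- back up, restore to a second location, and prove the restore byte-for-byte -- using stand-in paths under /tmp:

```shell
# Miniature backup/restore drill with throwaway paths.
set -e
SRC=/tmp/backup-demo-src
DST=/tmp/backup-demo-restore
mkdir -p "$SRC" "$DST"
echo "payroll data" > "$SRC/important.txt"

tar -cf /tmp/backup-demo.tar -C "$SRC" .    # the "backup"
tar -xf /tmp/backup-demo.tar -C "$DST"      # the "restore"

# Verify: checksums of the original and restored files must match.
orig=$(cksum < "$SRC/important.txt")
rest=$(cksum < "$DST/important.txt")
[ "$orig" = "$rest" ] && echo "restore verified"
```

A real restore test should of course go to different hardware, from the actual off-site media, but the principle is the same: a backup you haven't compared against the original is a hope, not a backup.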
So now your data is safe, but what do you put it on? You happened to build an office in an overly swampy part of central Florida, and this morning your sinkhole is front page news? Your data's safe...but now you have to go buy new servers to put it on. Obviously, you should think about backup hardware. While this can include offsite data centers, with OC lines, and constant data synchronization, you don't have to go that far. When you buy a group of servers, buy an extra one, set it up to make sure it works, configure it, put it back in the box, and send that box off-site to be safe. You don't have to upgrade it as often, although make sure it's capable of running your basic services. You may want to consider more than one server if your network requires it.
If you need to have a hot standby, then look at setting up a colocated server at a reputable ISP, like Digital Forest. The reputable part includes redundant network connections, heavy-duty power backup, stable facilities, etc. Check the facility out in person, and make sure that they not only can handle your needs, but can do it in your timeframe. A good partner here will help you set up an integrated plan so that you have your backups close to the machines you will need to restore them to.
Another solution is live servers that are off site. This gets tricky, as you are talking about a lot of data synchronization, and the beginnings of a wide area cluster implementation. But, if you cannot afford any downtime, then it's not beyond the realm of reality. Unfortunately, this is one area where Mac users are currently out in the cold. This is not due to a limitation in Mac OS X as much as the fact that Apple isn't shipping the hardware that this kind of system needs, and no third parties are looking at it yet. (Although I think there would be a possible market here for a high- or continuous-availability Mac OS X cluster.) Some examples of what I am talking about are available from companies like Marathon Technologies, Sun Microsystems, IBM, Microsoft, and SteelEye. While not for everyone, if you need that level of availability, then this is an idea worth looking into.
But there is more than just the machines. Make sure that your admins are all cross-trained. After the flood has washed away your data center is not the time to realize that the only person who knows how to set up your new Cisco router is on sabbatical in the Antarctic. In other words, eliminate empire-building. Make sure that everyone with a need to know, well, knows. Don't have only one person with the password to the email server. Spread the knowledge. It will save you lots of pain in the long run. Do you have extra copies of your documentation available? If not, get some. That way, even someone who is unfamiliar with a given system has a fighting shot at getting it running correctly.
Finally, once you have your plan set up...rehearse it regularly. There is a story, probably apocryphal, about Jimmy Carter, during his early days as President. He was getting a briefing on the plan that was supposed to whisk the First Family to safety in the event of an attack on the White House. He thought it was a neat idea. So neat that it should be tested...now. It was a disaster. The helicopters assigned to the task weren't even flyable, and a host of other things that looked good on paper either were broken, or non-existent. In the end, a plan that should have only taken about thirty minutes, took many hours. The point to this is that if the plan only works on paper, then you don't really have a plan, do you?
While I, and everyone else at workingmac.com hope with all our hearts that the events of 11 September are unique in our history, other disasters happen all the time, and companies that don't plan for them go out of business all the time. Learn from their mistakes.
NASA and WEP
created 8 Sept. 2001
So, in recent days, we have seen that a NASA division set up a wireless network that deals with WEP's weaknesses. They realized that they cannot control where wireless packets end up going, and so, assuming this would happen, set about building a secure wireless network.
Did they create their own encryption? No
Did they reverse engineer the WEP protocols, and make them better? No
Did they create some scrambled RF zone, where you need a special antenna to get on the network? No, and if you thought that, stop watching so much science fiction.
They used commonly available security methods and layered them. Wow, what an idea. Instead of banning a useful tool, or restricting its use to the point of uselessness, they analyzed the problem, identified it, and took careful, methodical steps to fix it. My, how radical. The details of their implementation are available from NASA's NAS web site.
The implementation is simple, yet effective. DHCP and NAT are used to set up IP addresses, and to log the MAC addresses of the users of the network. The DHCP server is a beta DHCPv3 server, which adds the ability to bar users from the network when a lease is released for any reason. The DHCP server only listens to the wireless network, so wired requests are ignored, and packet filters are used to lock out any other interfaces.
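The audit-trail half of that is nothing more than keeping MAC-to-IP mappings greppable. As a toy illustration, with a made-up dhcpd-style log format (not NASA's actual one), pulling the pairs out is a one-line awk job:

```shell
# Extract IP -> MAC pairs from a dhcpd-style log. The log lines below are
# fabricated samples written to a throwaway file for the demo.
LOG=/tmp/dhcp-demo.log
cat > "$LOG" <<'EOF'
Oct  8 09:14:02 gw dhcpd: DHCPACK on 10.0.1.23 to 00:30:65:aa:bb:cc via wi0
Oct  8 09:15:10 gw dhcpd: DHCPACK on 10.0.1.24 to 00:30:65:dd:ee:ff via wi0
EOF
# Field 8 is the leased IP, field 10 the client MAC in this sample format.
awk '/DHCPACK/ { print $8, "->", $10 }' "$LOG"
```

Point the same one-liner at your real DHCP server's log (adjusting the field numbers to its actual format) and you have the beginnings of the "who had which address when" record that makes the rest of the scheme auditable.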
Nothing radical here.
The wireless firewall runs on OpenBSD, which is freely available to all. It uses the built-in IP filtering (IPF) capabilities of OpenBSD to restrict non-essential protocols. Non-secure access is also allowed for email, VPN, and web access. UDP- and TCP-level filters are also used, to minimize port access issues. Login is done via PHP CGIs, and when the user's IP address is recycled, their access permissions are removed from the active user database, so that hijacking an IP address won't work.
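For flavor, an IPF ruleset in that spirit might look something like the fragment below. This is a sketch from the general ipf.rules syntax, not NASA's actual rules; the wi0 interface name and the port list are illustrative assumptions on my part.

```
# /etc/ipf.rules -- illustrative fragment only
block in on wi0 all                                                    # default deny from wireless
pass in quick on wi0 proto tcp from any to any port = 80 keep state    # web
pass in quick on wi0 proto tcp from any to any port = 443 keep state   # SSL login
pass in quick on wi0 proto tcp from any to any port = 25 keep state    # mail
pass in quick on wi0 proto udp from any port = 68 to any port = 67     # DHCP
```

The default-deny first line is the important part: everything not explicitly passed from the wireless side gets dropped.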
Again, this is standard stuff here, almost security 101.
The Web Server login is encrypted via SSL, using a certificate issued by Verisign. User databases are kept on a RADIUS server. If the user authenticates, then that IP address is allowed access.
Oooh...this stuff is based on five or more year old systems. Stuff anyone can buy.
The authentication web server is only available via wireless. All remote shell logins are done via SSH. Users' MAC addresses, along with the leased IP address and login date/time, are logged. So identifying who logged in when, and for how long, is easy.
Again, this is not CIA stuff. This is off the shelf stuff. NASA could have added a VPN for even more security, and can do so later if they like. Then only encrypted packets are ever sent. Kerberos could be added in here as well.
NASA even decided to drop WEP entirely, as it added overhead with no real benefit. This is how you do things. Don't panic, don't get stupid. Just identify the problem, figure out the best solution and implement it.
Hopefully others will follow this example, and we can get back to working with wireless networks without listening to the Chicken Littles squawking away.
created 24 August 2001
So, as we continue our look at the lack of great security in 802.11b's Wired Equivalent Privacy (WEP) system, we see that a great new tool was released to help bad people get illicit access. AirSnort can get you a master password within a period of four to ten hours, according to a recent article on wired.com.
While this seems quite intimidating, let's take a look at it. First of all, to do this, you need to find a wireless network. This isn't impossible in general, but if you take advantage of the hidden network feature in AirPort, and other implementations, then you can't just browse for one. Now true, you can intercept wireless packets without being on the network, but that requires a custom driver. Not brain surgery, but you don't find those kinds of drivers under your welcome mat either.
So you have to know that there is a wireless network in operation. Then you have to get in range. Easy, right?
Maybe, maybe not.
If you are talking about a coffee shop, or a convenience store, you can get in range relatively easily. Unless the building is made of steel-reinforced concrete, in which case, the range drops dramatically. If that's the case, and most commercial buildings are built this way, at least in larger cities, then you have to be in front of a window to get a good signal. Unless there's a sidewalk cafe nearby, someone's going to get a little curious.
But what about larger companies, that have their wireless networks up on the 10th, or even the 60th floor?
Hmm, well, now you need either a scaffold, or to be inside. I think that a guy sitting with a laptop on a scaffold outside someone's window is going to get noticed after 3 or so hours. At least I would hope so. If you want to work from the inside, then you have to find a place to leave a running laptop for a few hours, or you have to hide with it. At that point, you may as well just jack into an ethernet port with a copy of Etherpeek, and grab packets at a high speed.
Even at home, someone has to sit there and intercept data. For a couple of hours. People tend to notice a stranger in a van with a laptop. At least in my neighborhood they do. And that's assuming that you are getting full range on the base stations in the house. Depending on what part of the country you live in, your house may not be wood. Concrete block and stucco are murder on wireless range. I don't know about you, but a script kiddie lurking in my backyard is going to be talking to the police rather quickly.
Another point is that AirSnort can be used for an hour here, and an hour there. Okay, so now you have some strange person with a laptop and a wireless card coming back regularly for a few days, for an hour or so. Again, you are probably going to get noticed.
Even if they crack your code, then they have to capture the data. That takes time as well. You have to sit there and capture packets for a few hours, maybe even a few days, depending on what you are looking for. Then you have to analyze and process that data into a usable format. This takes time. Think about it. Probably 70% of network traffic these days is email, and a huge chunk of that is talking about the latest idiot jokes, or Aunt Margie's gout. Decoding and analyzing packets is also not that easy. You don't have to be a network genius, but you do have to know a little about what you are looking at to make sense of it.
Hopefully, my point is coming across. Stop panicking already. Superman is not flying about with an iBook, Spiderman is not dangling outside your data center with a Titanium. If you aren't visible from the ground, then just make sure you know who is supposed to be in your place of business, and who is not. That's a part of security, the physical part, and if you don't have that, then why bother with the rest? If you are visible from the ground, and you see someone hovering outside your office every day with a laptop, kill your wireless hub, and see what happens. Change your keys regularly. Because every time you change your key (as long as you don't recycle the keys), the cracker has to go back to square one, and take another four to ten hours to crack that key. And then you change it again.
None of this is to say that WEP is any better than it really is. It isn't. It is relatively easily cracked, and that's bad. But face it, in most cases, you aren't broadcasting packets across the Eastern Seaboard.
As with everything, use common sense, and some prudence, and you'll be fine.
General Security Theory
created 24 August 2001
In my last couple of articles, I talked about security as it applies to 802.11 wireless networking. This time, I thought we should talk about general security theory, as especially with Mac OS X having such fine Unix roots, security is, and should be, a constant topic for discussion.
I've had people ask me how do they know they are secure. The smarmy answer is, "You don't". There is more truth than attitude in this though. Security is a journey, not a destination. The folks trying to crack your system don't give up and go away. If you have data, or resources they want, they will keep trying until they don't want what you have anymore, or they find it somewhere else. They try a measure, you try a countermeasure, they counter that countermeasure, ad infinitum.
So right away, the first thing we need as administrators is knowledge. We need to know what our vulnerabilities are, and how to counter them. Things like the CERT advisory list, at http://www.cert.org/advisories/index.html are an excellent resource. The CERT advisory deals specifically with security breaches and holes. It is operating system agnostic, although after a while, you will notice a preponderance of Microsoft issues. Regardless of what systems you may run, if a legitimate security problem is found with that system, the notice gets sent out. Before OS X, things like CERT advisories happened to other people. But now, we need to keep up on these things. When you see holes discovered in BIND, Apache, Sendmail, etc., you need to take them seriously, and go about making sure that you either aren't affected, or that you take the appropriate action to patch the hole.
One of the problems this creates is that right now, the most common source for patches for Mac OS X is Apple, and although they are creating a better security infrastructure, Apple is still a bit too tight-lipped about when they are going to fix a potential security problem. It's only recently that they released the Tech Note that lists the changes, including security-related fixes, that were implemented in Mac OS X 10.0.1 through 10.0.4. Look for Tech Note TN2025 on Apple's developer site.
A good alternative for fixes and patches, or information on where to get them, is SecureMac, which is devoted to security issues on the Mac, and Mac OS X in particular. This site is one of the best Mac-focused security sites, and in addition to patch and fix information, is a good source of in-depth security articles for Mac OS X.
Even outside of obvious things like security advisories, there are ways for admins to get the knowledge that will make their OS X systems more secure. First off, go to Apple's developer site, and if nothing else, download the PDF version of the System Overview. It is the best starting point for learning about Mac OS X, and the best source of a lot of technical information on how the OS works, and why. Another good document to take a look at is the Network Kernel Extensions. Although a bit code-heavy, it is a good way to get a grip on how network and protocol drivers are implemented in Mac OS X.
Unix, specifically BSD, is another area that administrators need to be familiar with. As the listmom for the Mac-Managers list likes to say, "You've had five years to learn Unix, why haven't you?" Although most normal users will never need to learn about Unix, network administrators are not normal. We need to know how this OS works, and why, and we need to know this at all levels. So go and get a book on FreeBSD, take a class, buddy up to a Unix admin, but learn about the plumbing in this OS. In all seriousness, how do you expect to be able to build a secure Mac OS X network if you don't know about the insides of Mac OS X? Yes, it's more complex than Mac OS 9, and it's harder in some ways.
You don't have a choice. I guarantee that when the first public crack of a Mac OS X box happens, the first phrase out of the mouth of the person in charge of that Mac will be some variant of "I didn't know that....". Ignorance is no excuse for breaking the law, and it is no excuse for having an insecure system. Even if you haven't started implementing Mac OS X on your network, and especially if you haven't started this implementation, install it on a box you work with. Live in this system. Breathe Mac OS X. Don't use Classic unless you have to. Learn how this OS wants to work. Once you have a grip on what is going on with Mac OS X at all levels, find someone you can trust, and have them 'white hat hack' you. Set up a box with an Internet connection, secure it as best you can, and see if they can crack it. But don't just walk away and have them tell you whether they succeeded. Work with them, document what they do and the tools they use, and watch the box closely. See the effects that various crack attempts have on the box. Learn how you can spot them, and circumvent them on the fly.
Again, in a real world crack attempt, you may not get obvious notice like the machine crashing, or rebooting. It may be as subtle as an incorrect entry in a log, or a log disappearing when it shouldn't. Learn the symptoms, so you can spot them faster when it's for real.
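One way to practice spotting that kind of subtlety is to snapshot your logs periodically and compare the snapshots: logs should only grow, so a log that shrinks or vanishes is worth a closer look. Here is a minimal sketch of the idea (the file paths in the test are hypothetical; a real monitor would also watch timestamps and run on a schedule):

```python
import hashlib
import os

def snapshot(paths):
    """Record the size and a content hash for each log file."""
    state = {}
    for path in paths:
        with open(path, "rb") as f:
            data = f.read()
        state[path] = (len(data), hashlib.sha256(data).hexdigest())
    return state

def compare(old, new):
    """Flag logs that shrank or disappeared between snapshots."""
    alerts = []
    for path, (old_size, old_hash) in old.items():
        if path not in new:
            alerts.append(f"{path}: log disappeared")
        elif new[path][0] < old_size:
            alerts.append(f"{path}: log shrank ({old_size} -> {new[path][0]} bytes)")
    return alerts
```

Run `snapshot` twice with some time in between, feed both results to `compare`, and anything it returns deserves investigation.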
Remember, security is dynamic, and so is a lack of security. You are never done, and never will be done. So learn, learn, and learn some more. Next time, we'll get into some specifics about security and Mac OS X.
Access Security vs. Data Security
So, in my last article, I talked about the Wired Equivalent Privacy scheme, or WEP. However, judging by some of the comments I received, there are some misunderstandings about the differences between WEP and other security measures available in networking, so I thought I'd take some time to talk about them.
There are basically two aspects to network security: access and data. By access security, I mean controlling how people get access to your network. For example, if your network is wired, you don't just let anyone run a cable into a spare Ethernet jack. That's an example of access security. Logins, secure signons, Kerberos: these all control access. They don't restrict what you can do once you are on the network; they are simply ways to make sure that the people actually on the network are only the people who are supposed to be there.
Wireless networking has some access security capabilities as well. When you restrict access to your network by only allowing a specific set of MAC addresses onto that wireless network, that's access security. If you use 'hidden', or more properly, non-browseable networks, that is another example of access security. WEP fits in here as well. Even though WEP has encryption features, the password mechanism is more of an access-control feature: if you don't have the password, you cannot even connect to the network, never mind get at the data.
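That MAC-address restriction boils down to the base station keeping an allowlist and checking each would-be client against it. A sketch of the idea (the addresses below are made up for illustration):

```python
# Toy sketch of MAC-address access control, the kind of check a base
# station applies before letting a client associate. These addresses
# are hypothetical examples, not real hardware.
ALLOWED_MACS = {
    "00:30:65:12:ab:cd",   # the G4 in the office
    "00:30:65:98:76:54",   # the PowerBook
}

def normalize(mac):
    """Accept '00-30-65-...' or uppercase forms and canonicalize."""
    return mac.replace("-", ":").lower()

def may_associate(mac):
    """True if this client's hardware address is on the allowlist."""
    return normalize(mac) in ALLOWED_MACS
```

Keep in mind this is access security only: MAC addresses travel in the clear and can be forged, which is exactly why it needs to be layered with the data security measures below.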
Access security is the first level of security, and is the level that most folks can relate to. If you lock your doors and windows, burglars cannot get in, and your home is safe. Most of these measures are fairly simple, and at worst, tedious. They are easy to implement, and tend to stay out of your way.
The problem is, access security isn't enough. Yes, the door is locked, but if the burglar gets your keys, or picks the lock, they are in the house. Now, you can implement an intrusion detection system, which, like an alarm, screams its head off when the locks don't work. Email, pages, even phone calls are used to alert you that someone is breaking into your network. The problem here is the same as in our house analogy: intrusion detection is an art, not a science. Like the alarm system that constantly detects the cat, intrusion detection systems, while a good way to bolster access security, can be tough to tune correctly, and a bear to use.
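To see why the tuning is the hard part, consider a bare-bones detector that pages you when a source racks up too many failed logins. This is a sketch to illustrate the tradeoff, not a real IDS:

```python
from collections import Counter

def failed_login_alerts(events, threshold):
    """events is a list of (source_ip, outcome) pairs; return the
    sources whose failure count reaches the threshold.

    The tuning problem in miniature: a low threshold catches the
    burglar early, but it also pages you for every mistyped
    password -- that's the cat setting off the alarm.
    """
    failures = Counter(ip for ip, outcome in events if outcome == "fail")
    return sorted(ip for ip, count in failures.items() if count >= threshold)
```

With a threshold of one, every fumbled password is an "intrusion"; with a threshold of fifty, a patient cracker walks right past. Real systems are this dilemma multiplied across hundreds of signal types.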
So how then do you keep your data safe, once the burglar is in the house? By encrypting that data. This would be like folding the rooms of your house along a four-dimensional set of rules. Even if the burglar gets in the window, what's inside is so messed up that they may as well just give up and go burgle the neighbors.
That's what data encryption tools, such as VPNs and PGP, do for you. Via encryption, the data is rendered useless to anyone without the decryption key. So even if they capture your data, it's still useless to them. Now, as with access security, there are a few different ways to go about this, none of them mutually exclusive.
The first method is to encrypt the data at the file/folder level. Fold the rooms, invert the hallways. Things like PGP, Apple's File Security, etc. work in this space. They are easy to use, and can encrypt everything from a small text file to your entire hard disk. The problem with this method is that if you lose the key, or the encrypted file is corrupted, you have essentially lost that data. So while this is good for small numbers of items, doing it on thousands of files is tedious.
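To make the "lose the key, lose the data" point concrete, here is a toy illustration of file-level encryption. This is emphatically not a real cipher (use PGP or another vetted tool for actual data); it just shows that the ciphertext is meaningless without the exact key:

```python
import hashlib

def keystream(key, length):
    """Derive a toy keystream from the key by hashing it with a
    counter. Illustration only -- not a vetted cipher."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def crypt(key, data):
    """XOR the data with the keystream; the same call both
    encrypts and decrypts."""
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))
```

Decrypting with anything but the original key yields garbage indistinguishable from noise, which is the whole point, and the whole risk.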
The other answer is to encrypt the connection. This overlaps a bit with access security, but it is primarily an encryption/data security model. This method is analogous not to folding your house in four dimensions, but to doing that to the road between your home and your office, leaving both ends hidden and protected via other methods. So now the burglars can't find your house, and they can't even see the street. This is a very simple way to do things, requiring minimal client configuration, and it keeps you from having to worry as much about data corruption. This is the domain of things like VPNs (Virtual Private Networks) and SSH (Secure Shell). Again, there are disadvantages to this method. While you can have a software-based VPN, if you have a large number of VPN users, a hardware solution may be your best bet. There are still a couple of competing standards for VPNs (IPSec and PPTP), and they don't interoperate. Client support is not consistent between different VPN implementations. If a burglar gains access to the VPN, they may have unfettered access to your network. Finally, VPN implementations vary wildly between vendors in terms of standards compliance and usability.
In the end, there is no magic bullet that will secure your network perfectly. So you have to use layers of defense, picking the methods that serve your needs best. A combination of access and data security is usually best, and is the easiest to use and maintain. Hopefully this gives you an idea of where to start.
WEP is not encryption
created 8 August 2001
So, yet another article on how someone has managed to hack the Wired Equivalent Privacy (WEP) encryption for 802.11b wireless networks, aka Wi-Fi and AirPort. Once again, the clamor is going up about how the IEEE is using shoddy encryption, and that they are leaving the poor consumers and users of 802.11b networks open to the foulest kind of violations.
Well, that's only partially true, and most of the panic deals with an essential misunderstanding of what WEP is, and some less than perfectly forthright marketing by the wireless networking dealers.
First of all, WEP is not, nor was it ever meant to be, an industrial-strength data security algorithm. It was never designed to protect your data from script kiddies and more sophisticated crackers who want to discover your secrets. It is designed to make up for the inherent insecurity of wireless transmission, as compared to wired transmission. One of the problems with wireless transmission is that it's omnidirectional. (This is not always the case, but it is for 802.11b in general, so we'll leave out laser and microwave transmission media.) When you have a wireless network, all the base stations and end nodes are transmitting all packets in a sphere, regardless of where you may want them to go. In general, this sphere is about three hundred feet in diameter, although external factors can limit, or enhance, this. So when you imagine your wireless network, it is important not to imagine a web of lines from point to point, but rather a series of interconnected bubbles, rather like the foam from a bubble bath.
This 'bubblenet' has some rather serious security implications. First of all, you have to deal with the fact that you are broadcasting packets in a sphere, and that anyone inside this sphere with the correct equipment can receive those packets. (Since 802.11b is essentially wireless ethernet, and ethernet is a broadcast medium, this 'packets, packets, everywhere' should come as no surprise.) Secondly, you can sit some distance away from where someone else is transmitting, and receive these packets, and store them. Thirdly, given enough time, any form of encryption is breakable, all you need is desire, patience, and know-how.
So, for most of us, the idea of flinging our data to the world in a completely open format is 'a bad thing', but what can we do? Well, the basic answer is WEP. By using up to 128-bit encryption keys, WEP allows you to make sure that your data is at least as secure as unencrypted wired Ethernet. This is an important distinction, so we'll repeat it: WEP makes your data as secure as it would be on an unencrypted, wired Ethernet network. That's all it is designed to do, folks. It's not designed to repel attacks, keep secrets, hide data, etc. All it does is make sure that you are not inherently less secure because you aren't keeping your data in a wire. The problem occurs when people see the word 'encryption' and make assumptions about it. The fact that most vendors don't prominently explain what WEP is doesn't help. Apple's main AirPort site barely mentions this at all, but if you search the Knowledge Base for "AirPort Security" you are taken to the following section of "AirPort Wireless Communications: FAQ - Part 1 of 3":
8. What kind of security does AirPort provide?
Answer: AirPort offers password access control and encryption to deliver security equivalent to that of a physical network cable. Users are required to enter a password to log on to the AirPort network--and, optionally, an additional password for access to any other computer on the network. When transmitting information, AirPort uses 40-bit encryption to scramble data, rendering it useless to eavesdroppers.
Well, the first line is the accurate part. The last line should include the word 'casual' before the word 'eavesdroppers'. Farallon barely mentions WEP beyond the fact that their SkyLine product supports it and uses 40-bit encryption. They talk more about passwords and configuration for security. Probably the best explanation of what WEP is, and is not, comes from the Wi-Fi folks themselves, in a PDF available from their site. Point number 2 of that PDF states very clearly what WEP is supposed to be:
The goal of WEP is to provide an equivalent level of privacy as is ordinarily present with an unsecured wired LAN. Wired LANs such as IEEE 802.3 (Ethernet) do not incorporate encryption at the Physical or Media Access layer, since they are ordinarily protected by physical security mechanisms such as controlled entrances to a building. Wireless LANs are not necessarily protected by this physical security because the radio waves may penetrate the exterior walls of a building. IEEE 802.11 decided to incorporate WEP into the standard to provide an equivalent level of privacy as the wired LAN by encrypting the transmitted data. If this goal were achieved, then higher layer security mechanisms that were developed for wired LANs would work with no modification on IEEE 802.11 wireless LANs. It is important to emphasize that WEP was never intended to be a complete end-to-end security solution. It protects the wireless link between the client machines and access points. Whenever the value of the data justifies such concern, both wired and wireless LANs should be supplemented with additional higher-level security mechanisms such as access control, end-to-end encryption, password protection, authentication, virtual private networks, or firewalls.
So, what do you do to provide real security? The same things you would do with Ethernet, or dial-up access. You use real encryption, like PGP; you use VPNs, such as those available from Cisco. In other words, you don't turn off your normal security practices just because you saw the word 'encryption'. You research what WEP is, and augment it with the procedures you use over the rest of your network. You might decide to limit WEP-encrypted traffic to non-mission-critical items. In other words, you "plan twice, implement once", to paraphrase Bob Vila, and thereby avoid the problems that happen to other people.
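A footnote for the curious: one reason WEP can never be more than 'equivalent' privacy is that its small initialization-vector space eventually forces the same RC4 keystream to be reused on different packets, and keystream reuse is fatal to any XOR-based stream cipher. The sketch below uses a made-up keystream, not real RC4, but the arithmetic is the same: XORing two ciphertexts that share a keystream cancels the keystream entirely, leaving the XOR of the two plaintexts, which an eavesdropper can often unravel.

```python
def xor(a, b):
    """Byte-wise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

# Hypothetical keystream an access point might end up reusing once
# its IV space wraps around on a busy network.
keystream = bytes(range(1, 33))

p1 = b"the password is swordfish public"
p2 = b"meet at the loading dock at nine"

c1 = xor(p1, keystream)
c2 = xor(p2, keystream)

# An eavesdropper holding both ciphertexts never needs the key:
# the keystream cancels out of the XOR of the two ciphertexts.
assert xor(c1, c2) == xor(p1, p2)
```

This is the class of weakness the researchers keep finding, and it is also why layering real end-to-end encryption on top of WEP, as the Wi-Fi document above recommends, is the right answer.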
Macworld New York 2001 part 2
created 29 July 2001
In my last column, I took a look at the Macworld Expo 2001 keynote from the IT perspective, and saw that it wasn't nearly as bad as some would have you believe. So now we should look at what was on the show floor that would be of interest to the networking professional. (Before it comes up: there is no particular order to this, just my own meanderings across the Javits.)
4D was busy showing off the latest incarnation of WebSTAR, long a Mac server favorite, and until Mac OS X, the only product that allowed you to run a professional web site under the Mac OS without trying to run another OS. Now WebSTAR has competition of the highest caliber, namely Apache, yet the 4D folks don't seem to realize that WebSTAR is a "dead" product.
Maybe because it isn't.
WebSTAR V isn't a Carbon or a Cocoa application. It's a BSD application with a Java front end. The reason it isn't Carbon is that, according to 4D, it was simply going to take too long to port: even minor changes required by Carbon would break a dozen other parts of the application. So it was decided to rewrite the program entirely. Cocoa was a bit too heavyweight for the lower-level parts of WebSTAR, so writing it as a BSD application was the best fit for function. One of the biggest advantages of WebSTAR was the fact that, for all purposes, it was uncrackable. 4D says they are taking a number of steps to make sure WebSTAR V lives up to its predecessor's reputation. WebSTAR V will have its own security database, so that if someone cracks NetInfo, they still don't have the passwords to WebSTAR services. WebSTAR also doesn't run as root, so even if you exploit the WebSTAR application, you still don't have total access to the box. WebSTAR isn't part of the inetd services, so another potential opening for crackers is severed there. Finally, and most importantly, 4D is doing a code review of WebSTAR V, just to ensure that there are no coding mistakes that leave a door open for crackers. The initial release of WebSTAR V won't include an email server. 4D explained that the earlier WebSTAR email server was never one that they were happy with, and felt it was better to leave it out of the first Mac OS X release of WebSTAR, and take the time to rewrite it to a higher level of quality.
Although it was hidden in Apple's Small Business Solutions center, the demo of a Mac OS X-native Palm Desktop was gratifying to see. Although not a networking application in a direct sense, as any network manager knows, Palm and its products are a large part of the corporate computing landscape, and the lack of Palm synchronization conduits for Mac OS X has been a big reason for not migrating to the new OS. Although I was not able to talk to anyone from Palm at the show, my noodlings with the Desktop application showed it to be in decent shape, although I couldn't get the HotSync Manager application to run, and didn't have anything to test it on or with even if I had. Having recently bought a Kyocera SmartPhone, which is a Palm Pilot integrated into a cell phone, I would hope that by the time the new software is released, the Palm cradles finally abandon serial for USB support without the need for an adapter.
Given the number of times I have had to build custom boot CDs, I felt a stop by the Roxio booth was needed. As they had just released a Mac OS X-native preview of Toast Titanium, I asked them if they had gone with their earlier plan of not waiting for IOKit support from Apple, which they have, at least in the current version. I would love to be able to tell you if it works, but my installation dies while attempting to force LoginWindow to quit. Since this is the application that controls your session while logged into Mac OS X, I think I will be waiting for the next beta to see how well Toast 5 works as a native Mac OS X application.
Faxing is an area that Mac OS X users have had some concern over, and while I did not talk to the Smith Micro folks about the upcoming FaxSTF release for Mac OS X, I did talk to the new owners of 4-Site Fax, a fax server for the Mac. They are approximately two months from a native Mac OS X release, and the capabilities of the new version are extremely cool. Each server will be able to support up to sixteen fax lines, which gives Mac OS X the ability to handle a rather large number of inbound and outbound faxes on a single box. The client now supports TCP/IP, and a Mac OS X server can handle both Windows and Mac clients, making 4-Site able to handle faxes across the Internet, not just on AppleTalk LANs. Email integration is improved, giving users the ability to have faxes emailed to them as PDF documents. Faxes can also be sent via email, so for basic fax service, the client doesn't need to be installed on all machines in a company. But the coolest part is the fact that the client is written in Java. I asked the folks at the booth if this meant that any system with a supported Java implementation, such as Solaris, or Linux, etc. could run the client, and the reply was, "We haven't tested it, but I don't see why not." If the client is able to run on multiple OSes, with the only requirement being adequate Java support, then this could quickly make Mac OS X a major force in the fax server market, as it would be the only fax server with near-universal client support. Odd though it may seem, this was one of the most exciting announcements of the Expo for me.
Dartware LLC was showing off their Mac OS X-only Carbon version of InterMapper, the outstanding network management tool of the show. This tool has enough features and capabilities to give much larger and more expensive products a run for their money, and giving it a Unix foundation to work with has only made it better. Even more important than InterMapper, although not as visually cool, was Dartware's release a few days earlier of a Mac OS X port of the Net-SNMP package. This is a critical item for Mac OS X to have, and with a full-scale SNMP daemon package now available, integrating Mac OS X into almost any network management setup just became much easier. SNMP, although not the best way to run a network, is almost the only universally supported way to run one. Its absence was a real obstacle for corporate network managers who wanted to deploy Mac OS X on their networks, so having it freely available is a major plus for the OS. As well, since the code is under the BSD (truly) free software license, as opposed to the GNU (pseudo) free software license, other companies can integrate it into their products without having to worry about being forced to release the source for the result of that union. This also makes it far easier for Apple to integrate the Net-SNMP package into Mac OS X and Mac OS X Server. Not a sexy application, but a critically important one.
Finally, the Omni Group released a new revision of OmniGraffle, bringing it up to version 1.1. This is an excellent product, and far more deserving of the accolades showered upon OmniWeb. Yes, I know the Mac OS X orthodoxy loves OmniWeb, but it's a web browser, which is not exactly an underrepresented field on the Mac. A product that makes me forget my lust for Visio, however, and does it in a small, tight package with an elegant interface, and that doesn't cost me an arm and a leg per copy, is just amazing. Considering that getting from first code to current release has taken Omni less than a year, I really see OmniGraffle as far more of an example of what you can do with Cocoa than OmniWeb. About the only things I still miss from Visio are auto-attachment of lines between shapes, a far bigger network diagram palette, and the ability to auto-discover devices on a network and diagram them. For that last part, I think Omni should talk to Dartware about linking OmniGraffle with InterMapper, which already does this, and is in desperate need of prettier icons.
I know I missed quite a few products, such as ConceptDraw, and others, but I plead space limitations as my defense. Thanks for reading!
Macworld New York 2001 part 1
created 24 July 2001
So, Macworld Expo New York has come and gone, and now that I've had a week to think about things, here are my observations.
First of all, I'd like to thank everyone who came to the sessions and workshop that I was putting on, either solo, or with the most excellent help of Dave Every (The OS X In Depth Workshop). The comments I/we received, both good and critical, are greatly appreciated. I really think that while the floor show at Macworld Expo is neat and fun and loud, the sessions, both Pro and User, are where the real value of the Expo lives. In my experience, what makes them valuable is that they are not marketing sessions, or ads, as the keynote is. They are put on by people who work with that product or concept, and want to share the knowledge with others in the Mac community. We don't get paid for sessions, IDG doesn't fly us out or put us up. We get a couple of extra passes for folks, and that's pretty much that. We do it because we want to, and because it's fun. So for San Francisco, try a session or three; it's a great way to get more from the money you spend there.
As far as the keynote goes, I must either have low expectations, or be far more cynical than most. I didn't find it that depressing or bad. It was a Stevenote. Sometimes there are lots of new toys, sometimes there aren't. I like the new case design on the G4, although there were a few times, while watching people...touching the speaker in the case, that I thought maybe they should get a room. The speed bump in the iMac, and the return to more conservative colors, was no surprise either. That may be what really got to people: the lack of surprise. If that is the case, well, no one promised you that Steve would be juggling chainsaws and whistling 'Dixie'.
But that's not to say that the keynote was dull at all. From my IT geek perspective, this was a really good keynote, just more of a maintenance update. The 10.1 update seems to be what we have wanted, although in my brief look at it, I was quite disappointed to note that there is still no interface to the Mac OS X version of the Internet control panel, at least in the current build. Setting web browsers and email clients is all well and good, but I need to be able to set MIME settings and application and file links, and I need to be able to do it soon. There were a few more items that I wanted to see, but between lack of time and the Apple folks' amazing skittishness around members of the press, I didn't get a chance to do so. So it may be September before I get a chance to check out AppleScript and other such items.
The SMB client built into the system also didn't get looked at much, although the opinions on it from those who had seen it indicated that while the checkbox is filled, that's about all it's good for. Then again, in two months, much can happen. What was most gratifying to me was that 10.1 shows that Apple is still listening to its customers, and that our concerns are being addressed. There are a lot of third-party system and Dock hacks that are now part of Mac OS X, so if you still have concerns once you've had a chance to bang on 10.1 in September, by all means, let Apple know. Nothing is ever carved in stone, even at Apple. Especially at Apple.
The demos at the keynote were a mixed bag. Tony Hawk strikes me as Tomb Raider on a skateboard, so I'm not terribly impressed, but that's a personal preference, not a rating. The Office for Mac OS X demo was impressive. Microsoft is determined to set the standard for exactly what you can do with Carbon, and it shows. The Office suite looks much better than its Windows counterpart, and has, once again, shown that regardless of what the Windows side of Microsoft may do, the Mac Business Unit is writing top-notch software. For me, the coolest part is the non-contiguous selection feature that Word now has. Formerly a Nisus Writer exclusive, this means that you can select separate chunks of text or other objects in a Word file, and affect them as one. While not a feature you would use a lot, when you need it, it's really cool to have.
The FileMaker demo was impressive more for the fact that FileMaker Pro server will be a Cocoa, not a Carbon application. (In talking with folks on and off the floor, if the app in question is older, or complicated, I think you will see more rewrites happening. There are times when it's easier to recreate than to modify.) The World Book demo, while neat, struck me as the obligatory nod at the education market, to show them that Mac OS X will have applications in their world as well. Virtual PC on X was a good choice, and I think the Unix plumbing will be a particular boost to VPC's speed and usability.
I was quite amused by the non-reaction Quark generated, but is anyone really surprised? Quark 5 has been vaporware for a long time now; they still cannot get it out the door, and now we find that they are working on a native Mac OS X version as well? They may as well skip the Classic version and go straight to the Carbon version; heaven knows it couldn't delay things noticeably. (Tip for vendors: if you are behind on a product, don't demonstrate the next version. It makes you look dumb.) While Maya was, as always, an impressive demo, it was less interesting to me than the Adobe demo.
Quite realistically, there are very few products that Adobe could have running natively now with OS X in its 10.0.4 condition. The printing problems in the OS prevent Photoshop, Illustrator, InDesign, Pagemaker, or FrameMaker from running natively. The delays in IOKit hurt the scanner acquisition parts of Acrobat and Photoshop. So for those applications, seeing them now on Mac OS X would be a huge disappointment. (I also enjoyed the 'shades of Publish and Subscribe' Illustrator/GoLive demo.)
But what about GoLive? Seriously. I mean, in the end, it's a text editor with some movie capabilities. All of this exists and works in Mac OS X. GoLive doesn't need great printing or scanner access. You can test your pages with five browsers that run natively in various states of beta. There is nothing in GoLive's mission keeping it from being native right now. With Apache shipping as part of Mac OS X, along with FTP, GoLive looks like even more of a logical choice for a first native application, certainly better than Acrobat Reader. Mac OS X is screaming for a native high-end web design product, and neither Dreamweaver nor GoLive is there yet. I think that given the features of the OS, GoLive should have been a no-brainer. But then, I don't get to make these decisions.
John's Fun Hack Tip: If you want to get Acrobat 5.0 to run as a native Mac OS X application, open up Acrobat Reader in ResEdit, and copy the 'carb' resource from it into Acrobat 5.0. Abracadabra, you don't need Classic to run Acrobat. Of course, it's not fully ported yet, so you lose a lot of functionality, and if you somehow hose up your copies of both applications, yelling at me won't do you any good. But it's a neat trick.
I was pleased to see the ViaVoice demo, although I still think that for general use, it's less attractive than most folks think. Imagine a cube farm full of folks talking to their computers. Not a pretty sight. But for those who need this software, it's indispensable.
So that's my take on the keynote. I left out iDVD and the camera-throwing incident. These have been covered to death, and they really don't figure into my world. Next up, some choice products from the show floor.
created 7 July 2001
Well, it's a new web site for Mac users, and a new opportunity for me to talk with some folks who I may not have chatted with before, so for my first column, some introductions are in order.
As you can see from my bio, I've been writing for a couple of years for various publications, some still around, such as MacTech, and some not, such as the late MacWeek.com. I've been working on Macs since they first came out, and for the most part, I've always been drawn more to the hardware and support side of the platform. How it works, and how to fix it when it breaks, are the things that I gravitate towards on the Mac. I've spent the last ten to fifteen years in the IS/IT arena, always dealing with Macs, but also dealing with almost every other computing platform. I'm currently a trainer for Complete Mac Seminars, so I'm now taking that knowledge and experience and working on giving it to others who can benefit from it. I helped write a book on Mac OS X Server 1.x, and hopefully, that won't be the last one I get to work on. I've also been a Pro Conference presenter at Macworld Expo since 1999, speaking on Mac networking and management issues, and I also serve on the Macworld Expo Conference Advisory Committee, helping to ensure a steady flow of relevant presenters and content for the Expo's sessions. In addition, I serve as a forum moderator for MacFixIt.com.
The reason I'm giving you a light version of my background is so that you, the reader, have an idea of where I'm coming from with my articles. I also think it's nice to know whether a writer has any real-world experience with the issues and subjects they pontificate about. It also helps give you an idea of where I'm going to be aiming my articles. While I will always attempt to be as clear and concise as I can, there are times when I am going to delve into some fairly technical issues. I will attempt to explain things that I think need explaining, but I will also assume some basic knowledge on your part, depending on the subject. So, if you are new to networking, and you see a column on VLAN implementation 'gotchas', there may be some things that sail right over your head. I'll make no apologies when that happens, but my email address is on the site, and I welcome questions and constructive criticisms. Those who have seen my work on MacWeek.com will remember that I have no fear of flames, and I will happily get as loud and obnoxious as you wish to get. But that's never much fun, so if I say something you disagree with, then let's argue in a constructive, intelligent way. That way, we can both learn something.
I view writing on the web as a two-way street. I think it's important to communicate with you, not at you. So if you have a problem, or product, or idea that you want to see talked about in a wider circle, let me know that too. I have gotten, and hope to keep getting, many good ideas for columns from readers. In a strict corporate sense, you folks are my customers. You may not always be right, but you are always customers. So I take your comments seriously.
I'm also going to talk a lot about non-Mac OS platforms and environments. The days of pure AppleTalk, Mac-only environments are gone. Forever. Accept that. The world of computing now is inherently heterogeneous. This is a good thing, because in the end, No One Tool Can Do All Jobs Equally Well. Even Mac OS X, while a phenomenal operating system, is not the perfect tool for all jobs. In fact, for some, it may be the wrong tool at all levels. If I think that is the case, then I'll write that. It won't happen terribly often; I think Mac OS X is that good of an operating system.
One caveat: if you are looking for any kind of NDA news here, you are going to be disappointed. To me, NDAs are a form of trust. The company issuing the NDA is agreeing to give me access to X or Y in exchange for me not talking about it. If I violate that trust, then companies stop talking to me altogether. In the long run, that hurts me, because then I can't talk to people even without NDAs, and it hurts you, because my subject matter leaves the realm of fact and enters the land of speculation. When that happens, you can no longer trust the integrity of what I am writing. Which raises the question: why read someone who is so obviously untrustworthy?
Finally, in the end, this is a column. 90% of it is my opinion on something. Feel free to disagree, feel free to try to change my mind. But if you can't, then let's agree to disagree, and have a good time doing it. In other words, if I don't agree with you, it's not personal. If I trash your favorite product, it's not personal against you; it just means I didn't like the product. Guess what? We can disagree, and still both be right.
Application Packaging in Mac OS X
Created 10 July 2001
Just to start off, this is not an explanation of how to package an application in a Mac OS X bundle, nor is it a detailed look at how packages work. It's a higher-level look at application bundles in Mac OS X, from a user and administrator perspective rather than a programmer perspective.
One of the more radical changes that Mac OS X brought the Mac community was in how our applications are put together. While they still appear the same, an application file with an icon that you double-click to run, they are in fact built differently. Instead of a monolithic executable file with support files scattered hither and yon, what we think of as an application file is, in Mac OS X, more often a folder. (The reason the folder looks and acts like a single application file is the Finder's bundle bit. If it is set one way, the bundle appears as a single application; otherwise it appears as an ordinary folder. So the bundle bit finally has an obvious use.)
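To make the "application is really a folder" idea concrete, here is a skeleton of a bundle built by hand from the shell. MyApp and all of its files are hypothetical stand-ins; only the Contents/MacOS/Resources folder names follow the standard bundle convention.

```shell
# Build a skeleton of a hypothetical application bundle, MyApp.app.
# The Contents, MacOS, and Resources folder names follow the standard
# bundle layout; everything else here is an illustrative stand-in.
mkdir -p MyApp.app/Contents/MacOS
mkdir -p MyApp.app/Contents/Resources/English.lproj

touch MyApp.app/Contents/Info.plist    # metadata the Finder reads
touch MyApp.app/Contents/MacOS/MyApp   # the actual executable
touch MyApp.app/Contents/Resources/English.lproj/Localizable.strings

# Because a bundle is just a folder, ordinary tools can walk it:
find MyApp.app -type f
```

On a real Mac the Finder shows MyApp.app as a single icon; from the shell it is just a directory tree like this one.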
Within the folder is a series of folders and files that take the place of the resource and data forks of Mac OS 9. The big advantage of bundles is that they are single-forked. So if you are transferring a bundle to another Mac OS X machine using UFS instead of HFS+, the resources are not destroyed, as they would be with Mac OS 9 applications. (Mac OS X does handle two-forked files on UFS, by splitting the forks into separate files and then keeping track of them. This causes extra work on UFS systems, which is one of the reasons that UFS under Mac OS X is somewhat slower than on other Unixen.) Even more convenient, regardless of the server type or file system type, you can now transfer applications from your Mac to those systems without needing things such as BinHex or MacBinary. This makes using Windows, Novell, and Unix servers much easier, as they no longer need to work as hard to preserve Mac files.
Other advantages of bundles are that under Mac OS X, Sherlock can index the text files inside bundles, and that one bundle can carry all the localization strings and other information an application needs to operate in more than one language. As well, if the developer wants to add support for a new language, all they need to do is copy the new language's files into the bundle, and the language is added. No OS modifications needed.
Bundles make application installs and uninstalls easier by allowing a correctly built application to keep all the files it needs inside the bundle. Plugins, libraries, almost everything except for user preferences can live in the bundle. This reduces installation to a file copy, and uninstallation to dragging the application to the trash and emptying it. This matters especially on a network, and with applications like Photoshop, which in their Mac OS 9 incarnations can have thousands of files in multiple locations on your hard disk. While the file counts may not go down with bundles, the location counts will. This makes network installs and uninstalls far simpler, even for Photoshop-sized applications. Thanks to the dynamic nature of bundles and Mac OS X, plugins can be added to a running application and be usable without restarting it, making plugin usage much easier and more immediate.
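The install/uninstall point can be sketched in a few shell commands, again with a hypothetical MyApp.app staged in a build directory; the local Applications folder here is a stand-in for the real /Applications.

```shell
# "Install" a hypothetical bundle: one recursive copy.
mkdir -p staging/MyApp.app/Contents/MacOS
touch staging/MyApp.app/Contents/MacOS/MyApp

mkdir -p Applications                  # stand-in for /Applications
cp -R staging/MyApp.app Applications/  # installation is a file copy

# "Uninstall": one recursive delete, with no stray files left elsewhere.
rm -rf Applications/MyApp.app
```

Compare that with a Mac OS 9-style install, where removing an application cleanly meant hunting down extensions, libraries, and support files all over the disk.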
Application updates are made far simpler as well. For example, in Mac OS X, I use the free Java Configurator to set up my AirPort Base Stations. When I started using it, the version was 1.4. An update was released that took it to 1.5. To update my application, all I did was take the .jar file for the update, copy it into the appropriate place in the bundle, and start the application, and there it was: an updated Java Configurator. So for developers, this makes updating code within a bundle nothing more than a file copy. And it can be done in such a way that even if the Mac shuts off in the middle, say from a power outage, you still have a working application, albeit not an updated one. So bundles help make updating applications not only simpler, but safer.
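That update amounted to replacing a single file inside the bundle. A minimal sketch of the idea, with an invented bundle name, jar name, and Resources/Java path standing in for wherever the real application keeps its code:

```shell
# Hypothetical Java application bundle whose code lives in one .jar file.
# The bundle name, jar name, and Resources/Java path are illustrative.
mkdir -p "Configurator.app/Contents/Resources/Java"
echo "version 1.4" > "Configurator.app/Contents/Resources/Java/Configurator.jar"

# The update is nothing more than copying the new jar over the old one:
echo "version 1.5" > Configurator-1.5.jar
cp Configurator-1.5.jar "Configurator.app/Contents/Resources/Java/Configurator.jar"

cat "Configurator.app/Contents/Resources/Java/Configurator.jar"
```

Because the copy replaces one file, an interrupted update leaves you with either the old jar or the new one, not a half-installed application.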
Carbon applications can make extensive use of bundles as well. If you look inside AppleWorks, you see a directory for the Mac OS 9 code and one for the Mac OS X code. But many of the libraries used internally by AppleWorks exist in only one version, and are aliased so that both versions of the application can use the same files, saving space. As well, AppleWorks has its own internal copy of the MacLink Plus translators. These exist only inside the bundle, since only AppleWorks needs to use them. Language files exist the same way, as subdirectories of the bundle.
By correctly using a bundle structure, developers can greatly increase their reuse of code, which gives us smaller, tighter applications, which, having less code, can be less buggy, in theory at least.
So bundles, while odd to Mac users, have significant advantages that can make Mac OS X far more useful to us than it might otherwise be, and make our lives as Mac users just a little easier. Which is why we use Macs in the first place, isn't it?
Don't read this stuff for currency, or even good editing. The stuff here is the raw, pre-edited material.
Note: A good editor is FAR more valuable than a good writer. Any fool can write, it's the editor that makes the writer look like something besides a fool.
But I notice that often, anything more than about six months old is gone from the Internet, and that's a shame. There's a history with anyone and anything on the 'net, and I don't think that should be so easily discarded.