Perhaps the most surprising change for Winter ’14 is the elimination of script limits, to be replaced with a single CPU time limit for each transaction.

This is an extraordinary change, and it’s worth taking a few minutes to explore the consequences, both long term and short term, of this decision. Keep in mind that what follows are my preliminary thoughts – I’m still somewhat in a state of shock 🙂

In the immediate future, I don’t expect this change to have any impact. I believe SFDC when they say they’ve analyzed the situation and found that no current code will exceed the CPU time limits.

To understand the long term impacts, let’s consider the real meaning of this change.

  • First, managed packages will no longer have their own set of script limits, or their own CPU time – CPU time will be shared among all managed packages participating in a transaction and any code native to the organization.
  • Second, my understanding is that time spent within Salesforce code counts as CPU time. Until now, script limits applied only to your own code – a long-running built-in operation such as a sort or a regex match counted as just a single script statement. (A short sketch follows this list.)
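
To make that second point concrete, here’s a minimal sketch in anonymous Apex (the string and list sizes are arbitrary, chosen only to make the work measurable): each of the two marked calls used to count as a single script statement no matter how much work it did, while under the new model the milliseconds it burns count against the shared CPU limit.

```apex
// Build some test data (sizes are arbitrary).
String haystack = '';
for (Integer i = 0; i < 1000; i++) {
    haystack += 'abc123 ';
}
List<Integer> numbers = new List<Integer>();
for (Integer i = 0; i < 10000; i++) {
    numbers.add(Math.mod(i * 7919, 104729));
}

Integer before = Limits.getCpuTime();
Boolean found = Pattern.matches('.*\\d{3}.*', haystack); // one "statement", real work
numbers.sort();                                          // one "statement", real work
Integer after = Limits.getCpuTime();

System.debug('CPU ms consumed by two single-statement operations: ' + (after - before));
```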

This will obviously have an immediate impact on how one might code for efficiency. Your code can be more verbose – there will be less need to build complex conditional statements that are barely readable just to cram everything into one line of code. Not having to trade off readability for efficiency will be very nice.
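
Purely as an illustration (the variable names and tier thresholds here are invented), this is the trade-off I have in mind – the crammed one-liner that statement counting used to encourage, next to the readable version that a CPU time limit makes essentially free:

```apex
Decimal amt = 2500;

// Crammed into one statement to save script lines:
String tier = (amt == null || amt <= 0) ? 'None'
    : (amt < 1000 ? 'Bronze' : (amt < 10000 ? 'Silver' : 'Gold'));

// The same logic, spread out for readability:
String tier2;
if (amt == null || amt <= 0) {
    tier2 = 'None';
} else if (amt < 1000) {
    tier2 = 'Bronze';
} else if (amt < 10000) {
    tier2 = 'Silver';
} else {
    tier2 = 'Gold';
}
```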

For the first time, Apex developers will need to care about the efficiency of the built-in Apex class code. This will be a whole new topic for discussion as the community gradually discovers which classes and methods perform well, which should be avoided, and when.
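
I’d expect the community’s experiments to look something like the micro-benchmark below – a sketch only, with an arbitrary choice of methods and iteration count – wrapping alternative built-in calls in Limits.getCpuTime() readings and comparing the results:

```apex
String sample = 'The quick brown fox jumps over the lazy dog';
Pattern p = Pattern.compile('fox');
Boolean hit;

Integer t0 = Limits.getCpuTime();
for (Integer i = 0; i < 50000; i++) {
    hit = sample.contains('fox');        // candidate #1: simple string search
}
Integer t1 = Limits.getCpuTime();
for (Integer i = 0; i < 50000; i++) {
    hit = p.matcher(sample).find();      // candidate #2: regex search
}
Integer t2 = Limits.getCpuTime();

System.debug('String.contains: ' + (t1 - t0) + ' ms; Pattern/Matcher: ' + (t2 - t1) + ' ms');
```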

The real question comes down to what happens going forward – say, six to twelve months from now. Without the script limits, the pressure to optimize code will be reduced, and I’m sure we’ll see code appear on orgs that would never have survived in the current system.

As an ISV partner, this brings up an interesting question. What happens when some of that bad code, either on an org or in another package, uses up most of the CPU time, and when it becomes time for my package to run, limits are exceeded? Running a debug log with profiling information should presumably allow identification of the greedy piece of code, but how many sys admins will take the time or trouble to actually figure this out? It’s so much easier to blame a package – possibly the one unfortunate enough to have tipped the CPU limit. As this occurs more and more often, one can envision customers gradually losing trust in applications in general, never knowing whether any given one can be run safely. Ultimately this could impact trust in the platform overall.
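
One defensive measure a package vendor could take – this is just a sketch, and every name in it is hypothetical – is to record and log the CPU time the package itself consumes, so that when a limit is tripped the debug log makes attribution obvious:

```apex
public class CpuAccounting {
    // The work to be measured implements this interface.
    public interface Job {
        void execute();
    }

    // Runs the job and logs the CPU cost attributable to it,
    // along with the transaction-wide totals.
    public static void run(String label, Job job) {
        Integer before = Limits.getCpuTime();
        job.execute();
        Integer after = Limits.getCpuTime();
        System.debug(LoggingLevel.INFO, label + ' used ' + (after - before) +
            ' ms of CPU; transaction total is ' + after + ' of ' +
            Limits.getLimitCpuTime() + ' ms');
    }
}
```

A trigger handler or service entry point could then route its work through CpuAccounting.run('MyPackage', job) so the numbers land in the log automatically.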

Arguments that the proposed CPU time limits are generous (and they are) don’t (so far) address the well-known fact that software inevitably expands to use the available CPU time – often because it’s expensive to optimize code, and therefore it isn’t done unless it’s necessary.

There seem to me to be three possibilities going forward.

  1. There is a real commitment within SFDC to build infrastructure to support inefficient code, so that performance increases faster than inefficient code spreads. (And don’t try to convince me that people won’t write inefficient code.)
  2. The amount of headroom in the current CPU limits really is so great that it pretty much takes an infinite loop to exceed it. (I’m sure I won’t be the only one experimenting with this in the days and weeks to come – a sketch of such an experiment follows this list.)
  3. The engineers who made this choice are deluding themselves that all Apex developers will continue to write efficient code even when they don’t have to.
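
For what it’s worth, the experiment mentioned in the second item would look roughly like this in anonymous Apex (stopping at about 80% of the limit so the transaction still completes – the busy work inside the loop is deliberately pointless):

```apex
Integer limitMs = Limits.getLimitCpuTime();
Integer iterations = 0;
while (Limits.getCpuTime() < (limitMs * 8) / 10) {
    // Deliberately wasteful work.
    String scratch = String.valueOf(Math.sqrt(iterations + 1)) +
        String.valueOf(Math.mod(iterations, 7));
    iterations++;
}
System.debug('Iterations completed before reaching ~80% of the ' + limitMs +
    ' ms CPU limit: ' + iterations);
```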

As an ISV partner who ships a very large application, I confess that the relaxed script limits are definitely going to make life easier. At the same time, I really hope that when a CPU time limit is exceeded, Salesforce doesn’t just post an error blaming the application that happened to trip the limit, but instead provides more detailed information explaining where the CPU time went – so that clients and vendors alike can quickly focus on the code or package that actually deserves the blame.