Perhaps the most surprising change for Winter ’14 is the elimination of script limits, to be replaced with a single CPU time limit for each transaction.
This is an extraordinary change, and it’s worth taking a few minutes to explore its consequences, both long term and short term. Keep in mind that what follows are my preliminary thoughts – I’m still somewhat in a state of shock 🙂
In the immediate future, I don’t expect this change to have any impact. I believe SFDC when they say they’ve analyzed the situation and that no current code will exceed the CPU time limits.
To understand the long term impacts, let’s consider the real meaning of this change.
- First, managed packages will no longer have their own set of script limits, or their own CPU time – CPU time will be shared among all managed packages interacting with a transaction and code native to the organization.
- Second, my understanding is that time spent within Salesforce code counts as CPU time. Up until now, script limits only impacted your own code – a long-running built-in operation such as a sort or a RegEx match would count as just a single script line.
This will obviously have an immediate impact on how one might code for efficiency. Your code can be more verbose – there will be less need to build complex conditional statements that are barely readable in order to cram everything into one line of code. Not having to trade-off readability for efficiency will be very nice.
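To make that trade-off concrete, here’s a contrived Apex sketch (the variable and field choices are hypothetical, invented for illustration). Under script limits, the dense one-liner counted as a single statement while the readable version counted several; under CPU time limits, both cost essentially the same.

```apex
Account rec = new Account(Name = 'Acme');

// Script-limit era: crammed into one statement to save the statement budget.
String label = (rec == null) ? 'none' : (rec.Name != null ? rec.Name : String.valueOf(rec.Id));

// CPU-time era: the verbose version is no longer meaningfully more expensive.
String label2;
if (rec == null) {
    label2 = 'none';
} else if (rec.Name != null) {
    label2 = rec.Name;
} else {
    label2 = String.valueOf(rec.Id);
}
```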
For the first time Apex developers will need to care about the efficiency of the built-in Apex class code. This will be a whole new topic for discussion, as the community gradually discovers which classes and methods are good, and which should be avoided, and when.
The real question comes down to what happens going forward – say, six to twelve months from now. Without the script limits, the pressure to optimize code will be reduced, and I’m sure we’ll see code appear on orgs that would never have survived in the current system.
As an ISV partner, this brings up an interesting question. What happens when some of that bad code, either on an org or in another package, uses up most of the CPU time, and when it becomes time for my package to run, limits are exceeded? Running a debug log with profile information should presumably allow identification of the greedy piece of code, but how many sys admins will take the time or trouble to actually figure this out? It’s so much easier to blame a package – possibly the one unfortunate enough to have tipped the CPU limit. As this occurs more and more often, one can envision a case where customers gradually lose trust in applications in general, never knowing if one can be safely run. Ultimately this could impact trust in the platform overall.
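For what it’s worth, some of that detective work can be done from within Apex itself using the Limits class. A minimal sketch, assuming a hypothetical doExpensiveWork() method under suspicion:

```apex
// Measure CPU time consumed by a suspect section of code.
Integer before = Limits.getCpuTime();
doExpensiveWork(); // hypothetical method under suspicion
Integer elapsed = Limits.getCpuTime() - before;
System.debug('Consumed ' + elapsed + ' ms of CPU, out of ' +
             Limits.getLimitCpuTime() + ' ms allowed for this transaction');
```

Of course, this only instruments code you control – it doesn’t tell a sys admin which of several installed packages is the greedy one, which is exactly why better platform-level reporting matters.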
Arguments that the proposed CPU time limits are generous (and they are) don’t (so far) address the well-known fact that software inevitably expands to use the available CPU time (often because optimizing code is expensive, and therefore often not done unless it’s necessary).
There seem to me to be three possibilities going forward.
- There is a real commitment within SFDC to build infrastructure to support inefficient code, so that performance will increase faster than the spread of inefficient code. (And don’t try to convince me that people won’t write inefficient code.)
- The amount of headroom in the current CPU limits really is so great that it pretty much takes an infinite loop to exceed it. (I’m sure I won’t be the only one experimenting with this in the days and weeks to come.)
- The engineers who made this choice are deluding themselves that all Apex developers will continue to write efficient code even when they don’t have to.
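That headroom experiment is easy to run in anonymous Apex. This sketch spins until it approaches the CPU ceiling, polling Limits.getCpuTime() so it can stop short of the (uncatchable) System.LimitException:

```apex
// Spin until we get within ~1 second of the transaction CPU limit.
Long iterations = 0;
while (Limits.getCpuTime() < Limits.getLimitCpuTime() - 1000) {
    iterations++;
}
System.debug(iterations + ' loop iterations fit in the CPU budget');
```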
As an ISV partner who ships a very large application, I confess that the relaxed script limits are definitely going to make life easier. At the same time, I really hope that when CPU time limits are exceeded, they don’t just post an error blaming the application that tripped the limit, but rather more detailed information that explains to users where the CPU time went – so that it is easy for clients and vendors alike to quickly focus on the code or package that deserves the blame.
Just got these showing up in our sandbox org today. Automated tests pass at certain times of day, and not at others. Which is helpful… Any hints on debugging are welcome.
First, turn off Parallel Apex testing – depending on what the tests are doing, it absolutely can cause false errors. Beyond that, I’m also seeing failures to run tests if there are too many Apex test methods in a test class, and I’m still seeing script limits. So it’s clear things aren’t quite working as expected. What kind of failures are you seeing?
I encountered this error this morning:
Test failure, method: XXX — System.LimitException: Apex CPU time limit exceeded stack Class.
I was happy to see that the Winter release has removed the Apex code limits. But a question came to mind: why is the test class being affected by this release? I was expecting the test class to execute perfectly, since test classes were never counted toward the lines-of-code limits.
First, test classes have always been subject to script limits. In terms of what you are seeing – I think it’s too soon to say. At the moment SFDC has not actually switched over to CPU limits from script limits on the Winter 14 orgs I’ve seen. I do know of at least one issue where CPU timeouts are occurring in test classes where there are multiple test methods in a test class, but have yet to hear any further on the status.
Hello Dan, thank you for the comment. The fix for the error mentioned above was basically that I reduced the number of DML inserts and also reduced the number of test records. That was the solution, and it reduced the CPU time drastically. So even though one of the Winter ’14 changes was raising the governor limits, CPU time can still be the limit we hit. Please comment – this is my understanding.
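[For readers wondering what “reducing the number of DML inserts” looks like in practice: the usual pattern is to bulkify test data setup – build up a list and issue one DML statement, rather than inserting inside a loop. A minimal sketch with invented test data:

```apex
// Anti-pattern: one DML statement per record.
// (At 200 iterations this also trips the per-transaction DML statement limit.)
for (Integer i = 0; i < 200; i++) {
    insert new Account(Name = 'Test ' + i);
}

// Bulkified: one DML statement for all records.
List<Account> accts = new List<Account>();
for (Integer i = 0; i < 200; i++) {
    accts.add(new Account(Name = 'Test ' + i));
}
insert accts;
```
]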
My understanding is that winter 14 eliminates code statement limits. However, I have not yet actually seen this in action, as the Winter 14 orgs I have still enforce the old code statement limits.
How do I get rid of this issue?
I have millions of records to process, but I cannot use Batch Apex because I need to display real-time data every time.
Well, you don’t get rid of this issue – you can’t do real-time processing over millions of records on Force.com without Batch Apex. You’ll have to do your processing on another platform such as Heroku.