Advanced Apex 2nd Edition Available!

I’m pleased to announce the immediate availability of the second edition of Advanced Apex Programming for Salesforce.com and Force.com.

A few months ago, when SFDC announced the elimination of script limits, I knew that it had finally happened – a change that really impacted some of the content of the book. That led to some major changes in Chapter 3. And I figured, as long as I’m working on the book anyway, why not add a few more changes?

Chapter 6 extends the discussion on triggers to clarify some points based on questions I’ve received over the past year.

Chapter 7 has significant new content on batch Apex and scheduled Apex asynchronous patterns.

Chapter 8 is a new chapter on concurrency issues (the later chapters have been renumbered).

Plus, there are numerous other smaller changes and additions scattered throughout the book.

All told, the book has grown by about 50 pages.

It also has a snazzy new cover – making it easy to determine going forward which edition you’re looking at.

Also, unlike last year, I’m pleased to announce that the Kindle and Nook editions are available for those of you who prefer the eBook format.

The book is available now on several Amazon.com country sites, and I’ll be linking the others as they go live. The links on the left will take you to the new edition – it will take a few weeks before all of the channel databases are updated.

Dreamforce 2013 Sessions

I’ve been so busy for the past month that I haven’t had much time to post, but I’m pleased to say that I’ll be presenting three sessions at Dreamforce this year.

Monday at 11:15am, Moscone West – 2009, High Reliability DML and Concurrency Design Patterns for Apex

It’s remarkable, when you think about it, that even though Force.com is a highly scalable multi-user and multithreaded system, there is hardly any documentation on how to deal with Apex concurrency issues. I’m looking forward to shining some more light on this topic and sharing some of my own adventures (and misadventures).

Monday at 1:30pm, Hilton San Francisco Union Square – Community Success Zone Theater, Apex Design Patterns for Managed Packages

This one is for the ISVs in the community, particularly the developers. Those of us who create managed packages are a growing minority – it’s nice to see us getting some more attention this year!

Tuesday at 5:15pm, Moscone West – 2024, Design Patterns for Asynchronous Apex

At first I was thinking – 5:15pm before the gala? Talk about bad timing. But then again, talking about timing (good and bad) is a large part of asynchronous apex, and if the late hour gives you a syncing feeling, so much the better 🙂

I hope to see many of you there. Also, be sure to attend the developer keynote on Wednesday at 10:30am at Moscone South – Gateway.

And I encourage you to visit this site sometime this weekend for another post that you may find of interest.

Goodbye Script Limits, Hello what?

Perhaps the most surprising change for Winter ’14 is the elimination of script limits, to be replaced with a single CPU time limit for each transaction.

This is an extraordinary change, and it’s worth taking a few minutes to explore the consequences, both long term and short term, of this decision. Keep in mind that what follows are my preliminary thoughts – I’m still somewhat in a state of shock 🙂

In the immediate future, I don’t expect this change to have any impact. I believe SFDC when they say they’ve analyzed the situation and that no current code will exceed the CPU time limits.

To understand the long term impacts, let’s consider the real meaning of this change.

  • First, managed packages will no longer have their own set of script limits or their own CPU time – CPU time will be shared among all managed packages participating in a transaction and code native to the organization.
  • Second, my understanding is that time spent within Salesforce code counts as CPU time. Up until now, script limits only impacted your code – a long-running built-in operation such as a sort or a RegEx would count as only a single script line.

This will obviously have an immediate impact on how one might code for efficiency. Your code can be more verbose – there will be less need to build complex conditional statements that are barely readable in order to cram everything into one line of code. Not having to trade off readability for efficiency will be very nice.
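
For example (a hypothetical fragment – the names and rates here are invented purely for illustration), logic that used to be crammed into a single statement to save script lines can now be written out in full at essentially the same cost:

Account acct = new Account(Type = 'Partner', AnnualRevenue = 2000000);

// Script-limit era: one statement, one script line, barely readable
Decimal crammedRate = (acct.Type == 'Partner') ? 0.15 :
    ((acct.AnnualRevenue != null && acct.AnnualRevenue > 1000000) ? 0.10 : 0.05);

// CPU-limit era: the same logic written out, with no script lines to count
Decimal readableRate;
if(acct.Type == 'Partner') {
    readableRate = 0.15;
} else if(acct.AnnualRevenue != null && acct.AnnualRevenue > 1000000) {
    readableRate = 0.10;
} else {
    readableRate = 0.05;
}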

For the first time, Apex developers will need to care about the efficiency of the built-in Apex class code. This will be a whole new topic for discussion, as the community gradually discovers which classes and methods perform well, which should be avoided, and when.
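
You can start probing those costs yourself. Here’s a minimal sketch (assuming the Winter ’14 Limits methods behave as documented – Limits.getCpuTime() reports the CPU time consumed so far in the transaction):

// Measure the CPU cost of a built-in operation, such as a regular expression
Integer startCpu = Limits.getCpuTime();
for(Integer i = 0; i < 1000; i++) {
    Boolean matched = Pattern.matches('[a-z]+@[a-z]+\\.com', 'someone@example.com');
}
System.debug('Regex CPU cost: ' + (Limits.getCpuTime() - startCpu) +
    ' ms out of a limit of ' + Limits.getLimitCpuTime() + ' ms');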

The real question comes down to what happens going forward – say, six to twelve months from now. Without the script limits, the pressure to optimize code will be reduced, and I’m sure we’ll see code appear on orgs that would never have survived in the current system.

This brings up an interesting question for an ISV partner like me. What happens when some of that bad code, either on an org or in another package, uses up most of the CPU time, so that the limit is exceeded by the time my package gets to run? Running a debug log with profiling information should presumably allow identification of the greedy piece of code, but how many sys admins will take the time or trouble to actually figure this out? It’s so much easier to blame a package – possibly the one unfortunate enough to have tipped the CPU limit. As this occurs more and more often, one can envision customers gradually losing trust in applications in general, never knowing if one can safely be run. Ultimately this could impact trust in the platform overall.

Arguments that the proposed CPU time limits are generous (and they are) don’t (so far) address the well-known fact that software inevitably expands to use available CPU time (often because optimizing code is expensive, and is therefore often not done unless it’s necessary).

There seem to me to be three possibilities going forward.

  1. There is a real commitment within SFDC to build infrastructure to support inefficient code, so that performance will increase faster than the spread of inefficient code. (And don’t try to convince me that people won’t write inefficient code.)
  2. The amount of headroom in the current CPU limits really is so great that it pretty much takes an infinite loop to exceed it. (I’m sure I won’t be the only one experimenting with this in the days and weeks to come.)
  3. The engineers who made this choice are deluding themselves that all Apex developers will continue to write efficient code even when they don’t have to.

As an ISV partner who ships a very large application, I confess that the relaxed script limits are definitely going to make life easier. At the same time, I really hope that when CPU time limits are exceeded, the platform doesn’t just post an error blaming the application that tripped the limit, but instead provides more detailed information that explains to users where the CPU time went – so that clients and vendors alike can quickly focus on the code or package that deserves the blame.

A Most Interesting Apex Trigger Framework

In my book Advanced Apex Programming, I spend quite a bit of time discussing trigger design patterns. But I’m going to let you in on a little secret – what you find in the book isn’t really a “design pattern”, so much as a design concept.

And despite the chapter name “One trigger to rule them all”, I didn’t originate the idea of controlling execution sequence by using just one trigger – experienced Apex developers already knew this. What I think I brought to the table was the idea that we could take advantage of the Apex language’s object-oriented features to implement that concept in some really good, supportable and reliable ways.

Here’s a secret – the examples I used in the book do not, in fact, accurately reflect the framework I used in our own products. The framework we use is considerably more sophisticated. But the examples do reflect the concepts that our framework uses.

I did this because I do not believe there is any one “right” trigger design pattern or framework for everyone and every situation. So my goal in the book was to demonstrate the concepts involved, in the hope that others would build on them – coming up with variations of different design patterns and frameworks based on those concepts.
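
To make the concept concrete, here is a bare-bones sketch of the “one trigger” idea – this is not the framework from the book or from our products, and every class name here is invented:

// The trigger itself does nothing but delegate to a dispatcher class
trigger OpportunityTrigger on Opportunity (before insert, before update) {
    // Trigger.oldMap is null on insert; handlers must allow for that
    OpportunityDispatcher.dispatch(Trigger.new, Trigger.oldMap);
}

// A common interface lets the dispatcher treat all handlers uniformly
public interface ITriggerHandler {
    void handle(List<SObject> newRecords, Map<Id, SObject> oldMap);
}

public class OpportunityDispatcher {
    public static void dispatch(List<SObject> newRecords, Map<Id, SObject> oldMap) {
        // Execution order is controlled in exactly one place: this list
        for(ITriggerHandler handler : new List<ITriggerHandler> {
                new OpportunityDefaults(), new OpportunityValidation() }) {
            handler.handle(newRecords, oldMap);
        }
    }
}

// An example handler - OpportunityValidation would follow the same pattern
public class OpportunityDefaults implements ITriggerHandler {
    public void handle(List<SObject> newRecords, Map<Id, SObject> oldMap) {
        for(SObject rec : newRecords) {
            // ... apply default field values here ...
        }
    }
}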

I was thrilled to see the other day a blog post by Hari Krishnan called “An architecture framework to handle triggers in the Force.com platform”. It’s a beautiful piece of work (and I do appreciate the shout-out). As with our own framework, I don’t think it’s a solution for every scenario, but it does present a very elegant object-oriented solution to the problem. What really struck me was the innovative use of dynamic typing to instantiate objects based on the object type and name. Our own framework doesn’t use that approach, for the obvious reason that it was built before Apex supported dynamic object creation by type, but it’s definitely worth considering for any design going forward.
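
The dynamic dispatch technique looks roughly like this (my own sketch of the general idea, not Hari’s actual code – the naming convention and method are invented, and it reuses the ITriggerHandler interface from the sketch above):

// Derive the handler class name from the object name, then instantiate it by type
public static void dispatchByName(Schema.SObjectType objType,
        List<SObject> newRecords, Map<Id, SObject> oldMap) {
    String handlerName = objType.getDescribe().getName() + 'TriggerHandler';
    Type handlerType = Type.forName(handlerName); // e.g. 'OpportunityTriggerHandler'
    if(handlerType != null) {
        Object instance = handlerType.newInstance();
        if(instance instanceof ITriggerHandler) {
            ((ITriggerHandler)instance).handle(newRecords, oldMap);
        }
    }
}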

I don’t know if Hari has worked on the .NET platform (he does mention Java and C#), but the idea of dispatching by name is one we’ve seen in a number of Microsoft frameworks and languages. One can’t help but wonder if, now that we have a real Tooling API, someone might come up with a client tool to generate and manage trigger handlers based on a framework like this….

Not only might this automate some of the “plumbing”, but it could conceivably bring us to that state of Nirvana where, with judicious use of some global interfaces, we might be able to control the order of trigger execution across cooperating packages, and between packages and Apex code on an organization instance.

Ah well, one can dream. Meanwhile, kudos to Hari for a fine piece of work. Definitely worth a read.

Code Coverage and Functional Testing for Optional Salesforce Features

A couple of days ago Matt Lacey posted an excellent article on developing for optional Salesforce features. He ended it with a question – how do you ensure code coverage on orgs that have those features disabled?

For example – let’s say you have code that only runs when multi-currency is enabled on an org:

// obj and o are assumed to be generic SObject variables
if(Schema.SObjectType.Opportunity.fields.GetMap().Get('CurrencyIsoCode') != null)
{
    // The CurrencyIsoCode field only exists when multi-currency is enabled
    obj.Put('CurrencyIsoCode', o.Get('CurrencyIsoCode'));
}

How do you get code coverage for this section?

One way to do this is as follows:

First, we refactor the currency test out into its own function as follows:

// Cache the result so that repeated calls don't consume additional Describe calls
private static Boolean m_IsMultiCurrency = null;
public static Boolean IsMultiCurrencyOrg()
{
    if(m_IsMultiCurrency != null) return m_IsMultiCurrency;
    m_IsMultiCurrency = Schema.SObjectType.Opportunity.fields.GetMap().Get('CurrencyIsoCode') != null;
    return m_IsMultiCurrency;
}

Though not necessary for this example, in any real application with lots of tests for whether it’s a multi-currency org, you may be calling this test fairly often, and each call to Schema.SObjectType.Opportunity.fields.GetMap().Get('CurrencyIsoCode') counts against your limit of 100 Describe calls. This function (which is written to minimize script lines even if called frequently) is a good tradeoff of script lines against Describe calls for most applications.

Next, add a static variable called TestMode to your application’s class:

public static Boolean TestMode = false;

Now the code block that runs on multi-currency orgs can look like this:

if(TestMode || IsMultiCurrencyOrg())
{
   // Do this on multi-currency orgs; in test mode on other orgs,
   // substitute a dummy field for CurrencyIsoCode
   String ISOField = (TestMode && !IsMultiCurrencyOrg()) ?
                     'FakeIsoCode' : 'CurrencyIsoCode';
   obj.Put(ISOField, o.Get(ISOField));
}

What we’ve effectively done here is allow that block of code to also run when a special TestMode static variable is set. And instead of using the CurrencyIsoCode field, which would fail on non-multi-currency orgs, we substitute a dummy field – the field type doesn’t matter, since the code reads and writes the same field. This can be another field on the object that you define, or you can just reuse some existing field that isn’t important for the test. There may be other changes you need to avoid errors in the code, but liberal use of the TestMode variable can help you maximize the code that runs during the test.

Why use a TestMode variable instead of Test.isRunningTest()? Because the goal here is to get at least one pass through the code, probably in one specialized unit test. You probably won’t want this code to run in every unit test.
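
For example, a dedicated test might look something like this (a sketch only – MultiCurrencySupport and copyCurrencyFields are invented stand-ins for your own class and entry point, and it assumes 'FakeIsoCode' corresponds to a real field as described above):

@isTest
private class MultiCurrencySupportTest {
    static testMethod void coversMultiCurrencyBranch() {
        // Force the multi-currency block to execute,
        // even on orgs where the feature is disabled
        MultiCurrencySupport.TestMode = true;

        // Call whatever code contains the multi-currency block
        Opportunity src = new Opportunity(Name = 'Source');
        Opportunity dest = new Opportunity(Name = 'Dest');
        MultiCurrencySupport.copyCurrencyFields(src, dest);

        // Reaching this point without an exception means the block was
        // covered; add functional assertions appropriate to your field choice
    }
}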

With this approach you can achieve both code coverage and, with clever choice of fields and field initialization, functional test results, even on orgs where a feature is disabled.