Get Ready for Dreamforce 15

With just over a week left until the hordes descend on San Francisco, there’s a lot to prepare for. The big announcement for Dreamforce is going to be the new Lightning Experience. This new look of Salesforce is going to change everything! Rather than wait till you get to Dreamforce to begin learning about Lightning, I recommend taking a look at the new Trailhead modules. By finishing those modules, you’ll be better prepared and ready to jump into more advanced topics when you get to Dreamforce.

The Admin Trail – Migrating to Lightning Experience is best for existing admins who are interested in the impact Lightning will have on their orgs. It walks you through the basics of Lightning. One of my favorite parts is the chart that compares the differences between Lightning and Classic. If you only do one thing before Dreamforce, look at this chart!

On the developer side of things, the Developer Trail – Lightning Experience walks developers through Visualforce and Lightning Components. Unfortunately, since Lightning isn’t GA yet, there aren’t the hands-on projects you get with a lot of other modules. One of the coolest things Salesforce is doing as part of the Lightning push is sharing design best practices. The module on the Lightning Design System is fantastic for getting an understanding of how to build pixel-perfect components.

With these new Trailhead modules, you’ll be prepared for Dreamforce and ready to attend some awesome sessions!

If you are still looking for sessions (that don’t have anything to do with Lightning) to attend, here’s my shameless plug for the two sessions I’m presenting:

Writing Effective and Maintainable Validation Rules – Tuesday, September 15, 12:00 – 12:40

Writing Effective and Maintainable Validation Rules – Friday, September 18, 10:00 – 10:40

Apex Testing Best Practices – Thursday, September 17, 4:30 – 5:10

Trailhead: The Awesome New Way to Learn Salesforce

I’ll admit it, I didn’t get Trailhead when it launched at Dreamforce last year. There were only a few badges, and I didn’t really see what the big deal was. Fast forward a year, and holy cow! New Trailhead content just keeps getting added. I love the humor that the Trailhead team injects into some of the modules. Catter, anyone? I also love how Trailhead actually makes you work in an org and checks your work.

As content in Trailhead increases, I’ve been pointing more and more people to Trailhead to learn Salesforce. Want to move into an admin role? Take the Admin – Beginner and the Admin – Intermediate trails. Want to see why I love developing on the Salesforce platform? Take the Developer – Beginner and Developer – Intermediate trails.

I’m sure we’ll be hearing a lot about Trailhead at Dreamforce next month. Speaking of Dreamforce, there is a great badge on what to expect and what to do. This one has a lot of the humor I appreciate so much.

To get started in Trailhead, you need two Salesforce logins. The first login you need is your profile login. I use a developer org login for this. It is the same login I use for the forums and the Success Community. The benefit here is that it isn’t tied to your production org login, so when you get a dream job at another company, your profile will move with you. The second login you need is also a developer org. This login will be used to complete your challenges. It is important to use a developer org for this login because you may need features not available in a sandbox or your production org. And why would you clutter a production org with all of that stuff anyway?

Once you have your logins squared away, you are ready to start learning. One of the cool things the Trailhead team is doing is providing modules on add-on products. One such example is Event Monitoring. By going through this module, you can gain an understanding of these new features and decide whether your production org would benefit from them. You can be much better informed about the features without ever having to go through a sales pitch!

So, get on Trailhead and start exploring. Once you get a few badges, you’ll be hooked and want to collect them all!

 

Deploy Faster with Summer ’15 Features

Buried in the pages of the Summer ’15 release notes is a little feature that you might easily pass by. If you take advantage of this feature, then you can drastically speed up your deployments to production. This new feature allows you to choose which tests are run in a deployment.

First of all, the biggest change is that tests are only run when you are deploying Apex classes or triggers. This means your deployments will immediately be sped up if you are only deploying objects, new fields, workflow rules, etc. If you do have Apex in your deployment package, then by default all local tests are run. Local tests are those that are not namespaced; in other words, tests that are part of a managed package are excluded.

The cool thing is that you can now make a change to your build.xml file to specify the tests to be run. When you do this, only the tests you specify in the build.xml file are run, and Salesforce only looks at the code coverage from those tests on the code you are deploying. As long as the tests pass and the code coverage is sufficient, your deployment will succeed.
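Here’s a minimal sketch of the build.xml change, assuming the Summer ’15 (API 34.0) version of the Migration Tool; the test class names are hypothetical placeholders for your own tests:

<project name="Release" default="deploy" xmlns:sf="antlib:com.salesforce">
    <target name="deploy">
        <!-- Run only the named tests; coverage is calculated from these
             tests against the code being deployed. -->
        <sf:deploy username="${sf.username}" password="${sf.password}"
                   serverurl="${sf.serverurl}" deployRoot="src"
                   testLevel="RunSpecifiedTests">
            <runTest>AccountServiceTest</runTest>
            <runTest>OpportunityTriggerTest</runTest>
        </sf:deploy>
    </target>
</project>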

What does this mean in a real-life situation? I ran a couple of scenarios and saw marked improvements. If I deploy a single class the old-fashioned way, the tests in my org take about 30 minutes to complete. When I instead specified a single test class, my deployment time dropped to 2 minutes. This is huge when you think about needing to deploy an emergency patch. You no longer need to wait for all tests to complete.

A word of caution: if you have the luxury of time, I highly recommend you run all local tests and use the quick deploy feature. This allows you to stage your release and make sure everything is working properly. Use the specific tests option only in cases where you need to get something to production fast.

Upgrade your Force.com Migration Tool and start getting faster deployments! Let me know how you plan on using this new feature.

Streamline Your Deployments

We had a release to production last night, and I marveled at how smoothly things went compared to the first release we did as a team. In this post, I’ll outline the critical components of building a successful release package.

1. Documentation

Before you even start work on features or bug fixes, have a system in place to document what is being asked for and how it will be used. We have a custom-built set of objects in Salesforce to track this. Each feature or bug fix is assigned to a release and a sprint, and its status is tracked as it moves through the process. We also add metadata as a related list on the features so we know what was changed. I’ve written some code that builds a package.xml based on all the items in the release. We use this package.xml file during our deployments.
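For reference, the generated file is just a standard package manifest; the members below are hypothetical examples:

<?xml version="1.0" encoding="UTF-8"?>
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <types>
        <!-- Apex classes touched by items in this release -->
        <members>AccountService</members>
        <members>AccountServiceTest</members>
        <name>ApexClass</name>
    </types>
    <types>
        <!-- New or changed fields -->
        <members>Account.Region__c</members>
        <name>CustomField</name>
    </types>
    <version>34.0</version>
</Package>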

2. Development Environment

Each person making changes that will go into the release should do their work in their own sandbox. By staying in your own sandbox, there is no danger of overwriting someone else’s changes.

3. Version Control

Using version control ensures you can easily trace changes back to a person and a feature request. We use a repository on GitHub that contains a complete snapshot of our metadata. Each feature request is developed in its own branch, which is later merged.

4. Continuous Integration

Before a feature gets merged into the release branch in Git, we test each one with CI. We use Snap CI for this. When a pull request is created, Snap does a validation build and runs all unit tests to make sure nothing is broken. If the change breaks the build, we know about it early and can fix it before it gets merged and sent on to users for testing.

5. Deployment Tool

All deployments are performed using the Ant Migration Tool. This allows for the creation of a release package with all the metadata in it that we can deploy again and again to different sandboxes. We test the deploy at least three times before releasing to production.
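A single parameterized Ant target covers all of those deploys; org credentials come from a property file per environment (file names here are hypothetical):

<project name="Release" default="deployRelease" xmlns:sf="antlib:com.salesforce">
    <!-- Same package, different targets: point it at an org via properties -->
    <target name="deployRelease">
        <sf:deploy username="${sf.username}" password="${sf.password}"
                   serverurl="${sf.serverurl}" deployRoot="release"/>
    </target>
</project>

Run it as ant deployRelease -propertyfile staging.properties, then again with uat.properties, and finally against production.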

6. Sandboxes

We use a lot of sandboxes when preparing for a release. Below are a few of the main sandboxes we use.

Staging – this is a config-only sandbox used as the target of our CI process. This sandbox is also used for user demos and basic testing by the development team.

UAT – this is a full sandbox that we use for user acceptance testing. We deploy from Staging to UAT about two weeks prior to the release to give users enough time to test and sign off on the changes. Important: if a user group doesn’t sign off on a change, we pull their feature from the release.

Backup – this config-only sandbox is created a day before the release and serves as a complete backup of the org’s metadata. It is easier to create a sandbox than to take a backup using Eclipse, Ant, or MavensMate. This sandbox is used in case we need to review how something worked prior to the release (e.g., did that break with this release?). We can also restore quickly using change sets from that org if things go horribly wrong.

7. Quick Deploy

A recent feature Salesforce added is Quick Deploy. This allows you to do a validation deploy to production during which all tests are run. If it validates, you can click Quick Deploy to push the changes to production without having to run the tests again. The day of the release, we validate the package against prod and then wait for the designated release time to click the Quick Deploy button. This saves us so much time.
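With the Migration Tool, that validation run is just a checkOnly deploy; a target like this (added to the same build.xml) does it:

<!-- Validation-only: runs the tests against production but deploys nothing.
     A successful validation can then be quick-deployed from Setup. -->
<target name="validateProd">
    <sf:deploy username="${prod.username}" password="${prod.password}"
               serverurl="https://login.salesforce.com"
               deployRoot="release" checkOnly="true"/>
</target>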

 

DataTables in Visualforce, part 2

In the last post, I showed how easy it is to turn a plain HTML table into a functional DataTable. In this post, I’ll show how you can build on that and make the table even more dynamic. In this example, we’ll build a DataTable that lists accounts and lets you expand an account to see a list of its contacts.

First, instead of using a controller to return data for the table and building it with an <apex:repeat>, I am using JavaScript Remote Objects to get 100 accounts. Note that the limit is 100 records at a time, so if you have lots of accounts, you may need to implement filtering or pagination (maybe a topic for a future post). The biggest benefit of Remote Objects is that we don’t need a controller. We use the “ajax” option to specify the data for the table. The biggest issue I ran into was that it expects an object with a “data” property; you can’t just return the records from the remote object call.

In the columns option for the DataTable, I have to specify where to pull each column from. This is done with { "data": "_props.Phone", "defaultContent": "" }. The defaultContent option tells the DataTable to replace any undefined properties with a blank string, preventing errors from being thrown.

Next, we have to handle clicks to expand a row and display contacts. This code is fairly straightforward, with another remote object query to get the list of contacts for the account. I then just build up an HTML table to append in the child section.

Here’s a screen shot of what it looks like when we are all done.

[Screenshot: datatables ajax]

And here is the entire page.
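In sketch form, the page comes together like this; the field lists, element IDs, and library versions are illustrative rather than the exact code:

<apex:page showHeader="false">
    <!-- Remote Objects: no Apex controller needed -->
    <apex:remoteObjects>
        <apex:remoteObjectModel name="Account" fields="Id,Name,Phone,Industry"/>
        <apex:remoteObjectModel name="Contact" fields="Id,Name,Email,AccountId"/>
    </apex:remoteObjects>
    <script src="//code.jquery.com/jquery-1.11.3.min.js"></script>
    <link rel="stylesheet" href="//cdn.datatables.net/1.10.7/css/jquery.dataTables.min.css"/>
    <script src="//cdn.datatables.net/1.10.7/js/jquery.dataTables.min.js"></script>

    <table id="accounts" class="display">
        <thead><tr><th></th><th>Name</th><th>Phone</th><th>Industry</th></tr></thead>
    </table>

    <script>
    var table = $('#accounts').DataTable({
        // DataTables expects {data: [...]}, so wrap the records in the callback
        ajax: function(data, callback, settings) {
            new SObjectModel.Account().retrieve({limit: 100}, function(err, records) {
                if (err) { alert(err.message); return; }
                callback({data: records});
            });
        },
        columns: [
            {className: 'details-control', orderable: false, data: null, defaultContent: ''},
            {data: '_props.Name', defaultContent: ''},
            {data: '_props.Phone', defaultContent: ''},
            {data: '_props.Industry', defaultContent: ''}
        ]
    });

    // Clicking the first cell expands the row and shows the account's contacts
    $('#accounts tbody').on('click', 'td.details-control', function() {
        var tr = $(this).closest('tr'), row = table.row(tr);
        if (row.child.isShown()) { row.child.hide(); return; }
        new SObjectModel.Contact().retrieve(
            {where: {AccountId: {eq: row.data()._props.Id}}, limit: 100},
            function(err, contacts) {
                if (err) { alert(err.message); return; }
                var html = '<table>';
                contacts.forEach(function(c) {
                    html += '<tr><td>' + c.get('Name') + '</td><td>' +
                            (c.get('Email') || '') + '</td></tr>';
                });
                row.child(html + '</table>').show();
            });
    });
    </script>
</apex:page>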

DataTables in Visualforce, part 1

I’ve been doing a lot of work with jQuery DataTables. They can add a nice custom interface for your Salesforce data with very little coding. I’m going to devote the next few posts to ways to use DataTables in your Visualforce pages. In this first post, we’ll keep it pretty simple: a standard HTML table created with an <apex:repeat> tag, then jQuery to turn the table into a DataTable with zero configuration. Then we’ll do a couple of extra things to make our table a little more functional.

First, we need a controller that gives us the data we need. Our example will be listing contacts in Salesforce.
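A minimal version of such a controller might look like the following sketch (the class name and field list are illustrative):

// Exposes the contacts the page will render with <apex:repeat>
public with sharing class ContactTableController {
    public List<Contact> getContacts() {
        return [SELECT Account.Name, FirstName, LastName, Email, Phone
                FROM Contact
                ORDER BY LastName
                LIMIT 1000];
    }
}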

Now, here is a Visualforce page that turns that ugly table into a nice looking DataTable with sorting and filtering, plus some styling. Pretty easy, right?
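As a sketch, assuming the controller above and CDN-hosted libraries:

<apex:page controller="ContactTableController" showHeader="false">
    <script src="//code.jquery.com/jquery-1.11.3.min.js"></script>
    <link rel="stylesheet" href="//cdn.datatables.net/1.10.7/css/jquery.dataTables.min.css"/>
    <script src="//cdn.datatables.net/1.10.7/js/jquery.dataTables.min.js"></script>

    <table id="contacts" class="display">
        <thead>
            <tr><th>Account</th><th>First Name</th><th>Last Name</th><th>Email</th><th>Phone</th></tr>
        </thead>
        <tbody>
            <apex:repeat value="{!contacts}" var="c">
                <tr>
                    <td>{!c.Account.Name}</td>
                    <td>{!c.FirstName}</td>
                    <td>{!c.LastName}</td>
                    <td>{!c.Email}</td>
                    <td>{!c.Phone}</td>
                </tr>
            </apex:repeat>
        </tbody>
    </table>

    <script>
        // Zero configuration: sorting, filtering, and paging for free
        $(function() { $('#contacts').DataTable(); });
    </script>
</apex:page>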

Alright, let’s get a little fancy. We want a drop-down listing all the accounts so we can filter by account name. We also want to order the rows by contact last name by default. To do the sorting, we specify order with an array. The columns are zero-based, so last name, which is the third column, will be index 2.

To add the drop-down, we use jQuery to get the unique values from the first column, sort them, and then append them to a select component on the page. We also add a handler to the select component to filter when the value changes. If “All” is selected, it clears the filter.
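Both pieces might look like this, replacing the zero-configuration init above; the #accountFilter select element is an assumed addition to the page markup:

var table = $('#contacts').DataTable({
    order: [[2, 'asc']]  // last name is the 3rd column, zero-based index 2
});

// Build the account drop-down from the unique values in the first column
var select = $('#accountFilter').append('<option>All</option>');
table.column(0).data().unique().sort().each(function(value) {
    select.append('<option>' + value + '</option>');
});

// Filter when the selection changes; an empty search string clears the filter
select.on('change', function() {
    var val = $(this).val();
    table.column(0).search(val === 'All' ? '' : val).draw();
});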

And here’s what it looks like.

[Screenshot: datatables1]

In the next post, I’ll show how to leverage Visualforce remoting to add some dynamic features to a DataTable.

 

Visualforce Dispatcher Revisited

Funny how things can work just fine for the longest time and then you get a head-scratcher. I just ran across an issue with my Visualforce dispatcher class, which lets you control which page a user is taken to based on the record type of the record being created.

The problem was that a user had selected the setting to bypass the record type selection page and always use their default. It turns out that when a user does that, the record type Id isn’t passed to your dispatcher page as a parameter. A Google search turned up this Stack Exchange question, and I was able to solve the issue by looping through the record types and inspecting them to see which one is the default for the running user. When no record type parameter is passed, the default determines which page to send the user to. I’ve updated the original Gist so you can use the code.
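The heart of the fix looks roughly like this (Case is just an illustrative object; the updated Gist has the real code):

// Fall back to the running user's default record type when the
// RecordType parameter is missing from the page URL
Id recordTypeId = ApexPages.currentPage().getParameters().get('RecordType');
if (recordTypeId == null) {
    for (Schema.RecordTypeInfo info :
            Case.SObjectType.getDescribe().getRecordTypeInfos()) {
        // True only for the record type that is the user's default
        if (info.isDefaultRecordTypeMapping()) {
            recordTypeId = info.getRecordTypeId();
            break;
        }
    }
}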

Allowing any User to Edit Custom Settings

Custom settings are an awesome tool for the Force.com developer. They allow you to control code, formulas, workflow rules, and more on the platform. The problem is that they are a little hard to update, since they don’t come with a tab, page layouts, or the other features of a full-fledged object.

In order to manage a custom setting through the interface Salesforce provides, you must grant the user the “Customize Application” permission. Now, this permission shouldn’t be handed out to just anyone. It allows users to edit layouts, objects, fields, and more. In some cases, you may want to let your users edit a custom setting but don’t want to give them the rest of the power of Customize Application.

The solution is to create a Visualforce page that you grant your users access to. I merrily went down this route recently, and it all worked well until I actually logged in as one of the users and got a nasty surprise when I went to save the record. An error appeared on the page: “system.security.NoAccessException: Update access denied for SuperDuperSetting__c”. I tried all sorts of things, including setting my controller to “without sharing”, calling a webservice method, and using JavaScript Remoting. The only thing that worked was JavaScript Remoting, but it was a bit messy and I wasn’t happy with it.

I then started examining my debug logs and realized my save code wasn’t even executing. The error was happening somewhere upstream of the save method. This got me thinking about Visualforce and object binding. Sure enough, the problem was that I had bound my VF page directly to an instance of the custom setting. The VF page was allowing my user to view the setting, but when I invoked any action method, even a cancel, the request was blocked before it ever reached my code. I created a proxy class that mirrored my custom setting and bound my VF page to it instead. Lo and behold, my limited user was now able to save the custom setting.

Here’s my VF page, which works with both controllers, followed by the controller that only works if the user has the Customize Application permission, and then the version that works for everyone.
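The version that works for everyone boils down to a sketch like this, assuming a hierarchy custom setting; the Threshold__c field is illustrative:

public with sharing class SettingEditorController {

    // Plain Apex proxy mirroring the setting's fields; binding the page
    // to this class instead of the setting avoids the access check
    public class SettingProxy {
        public Decimal threshold { get; set; }
    }

    public SettingProxy proxy { get; set; }

    public SettingEditorController() {
        proxy = new SettingProxy();
        proxy.threshold = SuperDuperSetting__c.getOrgDefaults().Threshold__c;
    }

    public PageReference save() {
        // Touch the custom setting only here, inside Apex
        SuperDuperSetting__c s = SuperDuperSetting__c.getOrgDefaults();
        s.Threshold__c = proxy.threshold;
        upsert s;
        return null;
    }
}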

URL Hacking Cross Filter Reports

Warning: the following post describes a completely unsupported hack. Salesforce may make changes at any time that break it.

You probably know how to pass report parameters using the url. This is helpful when you want to create links from a record in Salesforce to a related report. Cross filter reports can also be passed parameters, but it is a little more difficult to do than with a standard filter.

The first thing I did was create a report that uses a cross filter. In this example, we’ll look for accounts that have an opportunity with a close date after a certain date.

[Screenshot: cross filter report]

I then looked at the page source and found the relevant hidden fields we need to work with.

[Screenshot: cross filter source]

As you can see, there are a lot of fields that need to be provided. Unfortunately, you can’t set up the cross filter in the saved report and just pass in the values like you can with standard filters. You need to save the report without the cross filter and pass all of the cross filter parameters in the URL.

Let’s examine each one to try and figure out what they mean.

ptable0 – This parameter corresponds to the main object in your report. I haven’t quite figured out what goes here, but it seems to be CUST_ID for custom objects and [OBJECT NAME]_ID for standard objects.

ocond0 – Use w for with and wo for without

rtable0 – This is the cross filtered object. It should be the object name for standard objects and the object ID for custom ones.

rtablecol0 – This is the column on the cross filtered object that corresponds to the parent object. For standard objects, use the field name, and for custom objects, use the field ID.

sfpc0_0 – This is the field to filter on. For standard objects, use the field name, and for custom objects, use the field ID.

sfpn0_0 – This is the filter operator. It can be values such as eq, ne, gt, lt, ge, le.

sfpv0_0 – This is the value you want to filter on.

[Screenshot: cross filter report and fields]

 

Now that we have a better idea of what each one does, we can use them in a URL. In this example, our URL will look like this:

/[ReportID]?ptable0=ACCOUNT_ID&ocond0=w&rtable0=Opportunity&rtablecol0=Account&sfpc0_0=CloseDate&sfpn0_0=ge&sfpv0_0=1/1/2014

We can just change the date at the end to filter on a different date. Just save your report without any cross filters and then use the URL parameters to filter accordingly.

I’ve found that the best way to figure out all the parameters is to first create a report with the filter, run it and then view the page source. Just search for the list of hidden fields and look for the values that you should pass.

Good luck with your URL hacking!

Validation Rule Tips

When it comes to keeping your data in good shape, validation rules are a great clicks-not-code weapon in your arsenal. That said, validation rules can cause you a lot of grief if you don’t set them up with a few things in mind. Here are the things I’ve learned over the years with validation rules:

1. Use them as a last resort. Users don’t like trying to save a record and being told that a certain field is required. Where possible, use record types and page layout assignments to mark fields as required so the user sees the little red line indicating a required field. Dependent picklists are another option when one field is only required once another is entered.

2. Make the error messages very clear to the user, and think long and hard about where you want the error message to be displayed. It is best to display the error message on a field, as it shows the user exactly where they need to make a change to save the record.

3. Try to keep each validation rule simple and focused on one problem. For example, let’s say that on Contact you want to make a phone and an alternate phone field required when a contact is marked as critical. Make two validation rules, one for each phone field, and display each error message on its field. Your users will see exact instructions on where to enter data to correct the problem.

4. Add an error code to your error message. As your org gets more and more complicated, you will be so glad you have an error code to track down a validation rule that is preventing records from being saved. For example, you might have a validation rule on a roll-up summary field. Saving a child record can fire that rule, and you will be completely lost looking for a validation rule on the child object when the rule actually lives on the parent. My error codes include the name of the object and a number. I also name my validation rules with that error code so I can find them quickly in Setup.

5. Add a kill switch to the validation rule. Sometimes you may want the validation rule to prevent saves only for some people. In that case, I like to use a hierarchy custom setting so I can turn off validation rules for the entire org, specific profiles, or specific users. In your validation rule formula, you can reference hierarchy settings using $Setup.Setting_Name__c.Field_Name__c.

So, putting it all together, here is what I think a good validation rule should look like:

[Screenshot: validation rule example]
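In text form, a rule following these tips might look like this; the setting, field, and error code names are illustrative:

AND(
  /* Kill switch: a hierarchy custom setting that can disable the rule
     for the whole org, a profile, or a single user */
  NOT($Setup.Validation_Settings__c.Disable_Contact_Rules__c),
  ISPICKVAL(Priority__c, "Critical"),
  ISBLANK(Phone)
)

with the error message displayed on the Phone field: “CONTACT-01: Phone is required when a contact is marked Critical.”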
