A/B testing is great for testing the finer aspects of a solution. It's a quantitative method: you change one thing (e.g. the call to action or button placement) and measure one result (e.g. click-through rate). Choose a single aspect to test, prepare the setting (an ad, website, onboarding sequence, etc.), and send half of your traffic to one option and half to the other. Measure the response to see how the two options are received by users.
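One practical detail the split relies on is keeping each user in the same variant across visits. A minimal sketch of one common approach, hashing a user ID so assignment is deterministic and roughly 50/50 (the function and experiment names here are illustrative, not from the original):

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "cta-test") -> str:
    """Deterministically assign a user to variant A or B (~50/50 split).

    Hashing (experiment, user_id) means the same user always sees the
    same variant, and a new experiment name reshuffles assignments.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"
```

Because the assignment is stable, you can log each visitor's variant alongside their clicks and compare the two groups afterwards.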
As Ben Yoskovitz outlines in his book Lean Analytics, A/B testing is useful but has a problem: for your test to be meaningful you need significant traffic, and it's hard to run many back-to-back tests on significant traffic volumes in a short period of time. To address this he suggests multivariate testing, in which different traffic cohorts test different aspects of your solution at the same time. There are a number of tools online that can help you with this type of analysis; see the supplies and resources section.
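To make the "significant traffic" point concrete, here is a sketch of the standard two-proportion z-test for comparing click-through rates between the two variants. The visitor and click counts are made-up example figures, and the function name is an assumption for illustration:

```python
from math import sqrt, erf

def ab_test_significance(clicks_a: int, visitors_a: int,
                         clicks_b: int, visitors_b: int):
    """Two-proportion z-test: is the difference in CTR likely real?"""
    p_a = clicks_a / visitors_a
    p_b = clicks_b / visitors_b
    # pooled rate under the null hypothesis (no difference)
    p_pool = (clicks_a + clicks_b) / (visitors_a + visitors_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # two-tailed p-value from the normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Illustrative numbers: 2.0% vs 2.6% CTR on 10,000 visitors each
z, p = ab_test_significance(200, 10_000, 260, 10_000)
```

With small samples the same 0.6-point CTR difference would give a large p-value, which is exactly why back-to-back tests on thin traffic are hard to read.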
WHEN: A/B testing is best used when you're far enough along to be testing smaller, more specific assumptions: the big product and solution assumptions have been validated, and now there are more focused questions to answer.
WHY: When tied to specific metrics, A/B testing can give clear yes/no indications of what is working best, making decisions more straightforward.