12 posts categorized "Data and Analytics"


Personifest 2019 Presentation: "3 Professional Wild Apricot Magic Tricks"

You are convinced that Wild Apricot is a great fit for your organization. You’ve got all the basics set up. But there are lingering thoughts — will it really do everything you need? Culled from several years of experience deploying Wild Apricot for organizations around the world, this presentation will illustrate the true transformative power of Wild Apricot.

NewPath Consulting will share three tricks: Website Design with CSS, Multi-chapter Member Directory Magic, and Professional Reports and Analytics.


For registered participants at Personifest 2019 a video of our talk along with slides is now available! All of the Personifest 2019 talks are available for free with a quick registration. Use the registration code 'personifest2019'.


11 Steps to Membership Management Success - Planning Your Database I - Managing Membership Levels and Bundles

Step 2 - Membership Levels and Bundles

In Step 1 of “11 Steps to Membership Management Success,” you read about planning and managing your membership database. In Step 2, you’ll learn about preparing your data for migration to Wild Apricot: managing memberships grouped around common elements.

One of the most important features of any membership management system is configuring membership levels (or tiers) and renewal policies. Membership management can be a complex topic, with several things to consider. We are going to try to make this topic easy to understand.

What are your membership levels and renewal policies?

A membership level is tied to one or more of the benefits your organization offers, and membership levels can be free or have a cost. Membership levels also have an optional renewal policy that specifies when members will have the option to renew. As you assess your renewal policy, ask these questions:

  • How are your membership levels currently advertised, and how do you wish to implement these features into Wild Apricot?
  • What will your membership renewal policy be -- monthly, yearly, quarterly, or no renewal at all?
  • Do you want to take advantage of offering paid options or collect specific information for each member who signs up or renews at a particular level?

Do you offer membership bundles (such as a family or group plan)?

Membership levels can be configured as “individual” or “bundle” membership levels. The type of membership level determines whether an individual can only renew their own membership or can renew on behalf of a group of others through a membership bundle. All of this is easily configurable when setting up membership levels.

A membership bundle is a collection of members who are linked together under an “umbrella” membership managed by a bundle administrator — the lead member of the bundle.

Your organization can use membership bundles to offer group memberships to companies, teams or families. A membership bundle enables a group to renew together, rather than requiring each individual to sign up or renew on their own. Each bundle member receives the same access to benefits as the bundle administrator.

Read more about membership bundles from Wild Apricot


Priorities for a membership bundle

Members in a membership bundle share several important attributes:

  • a common renewal date
  • a common membership status
  • a common membership level

The entire bundle is charged a single membership fee, paid by the bundle administrator. That’s why identifying the bundle administrator is critical.

If you plan to use membership bundles, your database must be set up to support the “bundling” process during the database import. For example:

  • All members in a bundle must have a database field linking them into the bundle: in Wild Apricot, this is the User ID or email address of the bundle administrator.
  • You will need to decide whether you will bundle using the email address of the bundle administrator or the unique number (aka User ID) of the bundle administrator.
  • To create the membership bundle, you will need to assign the bundle key of the bundle administrator (email address or User ID) to each member of the membership bundle in the import file.
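As a rough sketch of this preparation step (the column names and addresses here are illustrative placeholders, not Wild Apricot’s exact import headers), a small script can stamp the bundle administrator’s email address onto every row of the bundle:

```python
import csv
import io

def build_bundle_rows(admin, members):
    """Give the admin and every bundle member the same bundle key:
    the bundle administrator's email address."""
    bundle_key = admin["Email"]
    rows = []
    for person in [admin] + members:
        row = dict(person)
        row["Bundle ID or Email"] = bundle_key  # ties the group together
        rows.append(row)
    return rows

admin = {"Email": "sarah@example.com", "First Name": "Sarah"}
members = [
    {"Email": "sam@example.com", "First Name": "Sam"},
    {"Email": "ann@example.com", "First Name": "Ann"},
]
rows = build_bundle_rows(admin, members)

# Write the rows out as a CSV import file (in-memory here for illustration).
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["Email", "First Name", "Bundle ID or Email"])
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```

The same idea works in a spreadsheet: one column holds the administrator’s email, filled down for every member of that bundle.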


A simple import file will illustrate this process. In this file we are designating Sarah as the bundle administrator, using Sarah’s email address as the bundling mechanism. (The addresses below are placeholders.)

                   Email Address        First Name   Last Name   Member Bundle ID or Email
  Bundle Admin     sarah@example.com    Sarah        -           sarah@example.com
  Bundle Member 1  sam@example.com      -            -           sarah@example.com
  Bundle Member 2  ann@example.com      -            -           sarah@example.com

Note that each membership level can be designated with a maximum or unlimited number of bundle members. This business rule will be enforced during the contact import process or when you are adding or removing bundle members.
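To see why that rule matters while planning the import, here is a minimal pre-import check (field names are assumed, not Wild Apricot’s) that flags bundles exceeding a level’s maximum size:

```python
def oversized_bundles(rows, max_members):
    """Group import rows by bundle key and return any bundles larger than
    the level's maximum (pass None for levels with unlimited bundles)."""
    bundles = {}
    for row in rows:
        bundles.setdefault(row["bundle_key"], []).append(row["email"])
    if max_members is None:
        return {}
    return {key: emails for key, emails in bundles.items()
            if len(emails) > max_members}

rows = [
    {"email": "sarah@example.com", "bundle_key": "sarah@example.com"},
    {"email": "sam@example.com", "bundle_key": "sarah@example.com"},
    {"email": "ann@example.com", "bundle_key": "sarah@example.com"},
]
print(oversized_bundles(rows, 2))     # Sarah's bundle has 3 members, limit is 2
print(oversized_bundles(rows, None))  # unlimited bundles: nothing flagged
```

Catching oversized bundles in the import file is much cheaper than discovering them one at a time when the import is rejected.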


Critical input to the planning process

In one popular CRM system, CiviCRM, the bundling mechanism is called a “household” and can be exported as part of the membership record. Other systems may have some other indicator of a membership bundle. In some systems you may have to create the membership bundle manually if there was never a bundling mechanism in place.

This can be a complex, iterative process — in fact, it’s usually one of the most complex and somewhat abstract parts of getting started with Wild Apricot.

We can help you navigate this key planning process — Contact NewPath Consulting for a free consultation.

In the next chapter in this series, you’ll look at some final considerations in preparing your data for migration to the Wild Apricot platform: Step 3 – Tips and Prep of Your Master Import File.


11 Steps to Membership Management Success - Planning Your Database I

Step 1 - Create a single source of the truth


Now that you’ve gotten set up and familiar with Wild Apricot in Step 0, you’re ready to start planning how you’ll set up and manage your membership database.

In these next three articles in this series, you will complete Step 1, which will help you:

  • understand how and where your information about contacts and members is organized
  • understand membership bundles and when they can be used
  • make decisions about archiving and cleaning up your data

Your database plan will help you create your master import file, which will contain all the information you will be adding or migrating into your membership management system. The database plan and related considerations are key to defining your database and to achieving a full membership management system implementation.

We’ve broken Step 1 into three parts, to give you time to think through and carry out this critical first step in the process.

Let’s start by understanding what your organization’s data looks like now. 

What’s the current state of your organization’s database?

From the wide range of organizations we’ve worked with at NewPath Consulting, we have found a small to medium-sized organization’s data is usually “organic” in nature. That’s a nice way of saying it has evolved and passed through many hands. It is also possibly quite disorganized, potentially lacking important pieces of information. In short, it’s probably useful for people—but not for computers.

So answering the question about what your database looks like now involves doing some digging and housekeeping.

  • Do you know where all of your data about members and contacts resides? Can you easily collect, understand and use this information?
  • Do you already have your data stored in a database system, or is it spread across multiple systems or storage methods?

Sometimes when volunteers manage a database, they make it useful for themselves, but not easily accessible for your members or your systems. When volunteers change, the integrity and accuracy of your database may suffer.

If you do use a system, is it:

  • a spreadsheet?
  • a commercial or open source database like CiviCRM or Salesforce.com?
  • one or more custom, legacy databases?

Even if your contact management system comprises a drawer full of business cards and sticky notes, the starting point is the same – establishing what you have now, and deciding which fields you’ll want to use in your new set-up. You will also need to establish the “business rules” around certain fields in terms of what is allowed in those fields.

If your information is coming from different places or systems, you want to anticipate the issues that might arise with exporting or manipulating the data. Knowing what you want to keep, what each field means and what data needs to change to conform to Wild Apricot will help you manage this process more seamlessly.
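One way to make that field-by-field inventory concrete is a small mapping script. The legacy column names and target field names below are hypothetical examples, not a prescribed schema:

```python
# Hypothetical mapping from legacy export columns to the fields
# you plan to keep in the new system.
FIELD_MAP = {"E-mail": "Email", "Fname": "First name", "Lname": "Last name"}

def normalize_record(legacy):
    """Rename fields and flag empty values, so data problems surface
    before the import rather than after."""
    record, problems = {}, []
    for old, new in FIELD_MAP.items():
        value = (legacy.get(old) or "").strip()
        if not value:
            problems.append(f"missing {new}")
        record[new] = value
    return record, problems

record, problems = normalize_record({"E-mail": " pat@example.com ", "Fname": "Pat"})
print(record)    # {'Email': 'pat@example.com', 'First name': 'Pat', 'Last name': ''}
print(problems)  # ['missing Last name']
```

Running every exported row through a check like this gives you a punch list of cleanup work before you build the master import file.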

Think of it as paving a smooth and manageable path to creating one "source of the truth" where everyone – your contacts and members, your stakeholders, your staff and volunteers – will be fully and accurately described.

Do you know how to export data from your current systems, if necessary? If not, do you have the documentation or access to the people who can help get access to your data?

NewPath Consulting can help with this part of the process. We have experience with all sorts of complicated and “messy” systems.

The steps and estimates in this series are based on our experience migrating diverse customer groups to Wild Apricot. We believe these steps have an intrinsic order, so it’s important that you follow the steps in the order we’ve described in this series. If you can’t finish one before we publish the next, you can always come back to it when ready.

Watch for the next chapter in this series, in which you’ll start the second step in preparing your data for migration to the Wild Apricot platform: Step 2 – Managing Membership Bundles.

Do you have questions about Wild Apricot or managing your member and contact database? Contact NewPath Consulting for a free consultation.


Data Pipelines to the Rescue: Building modern analytics platforms

This article was originally published on the ChiefMartec blog. Alex Sirota from NewPath Consulting wrote the article as an entry into The Hackies essay contest for the Spring 2017 MarTech conference in San Francisco. It has been slightly updated with new information available as of October 2017 including new research on new data pipeline vendors.


In Out of the Crisis, page 121, Dr. W. Edwards Deming states:

The most important figures that one needs for management are unknown or unknowable, but successful management must nevertheless take account of them.

Deming realized that many important things that must be managed cannot be measured; you must still manage them nevertheless. But figuring out what is important to measure, and doing so effectively, is as painful today as ever. Effective digital transformation depends on measuring the things important to your customers, not just to your executive management and staff.

Fundamentally changing a business strategy requires some difficult and controversial choices. How can we make effective decisions in a world of constant noise and disruption? We need to observe and measure what customers and organization staff actually do, not what their biases dictate.

Data Access is Sacrosanct

Today, most key business performance data is stored in structured and unstructured formats on internally-managed infrastructure. The most important decisions are made using information stored in spreadsheets, presentations and in various proprietary data formats designed to keep data secure and inaccessible by most staff.

Data access is sacrosanct and business users have to go through a set of data priests guarding information fiefdoms to get it. If you want to see an integrated view of all business metrics, you have to build a “Key Performance Indicators (KPI) Dashboard.”

Been There, Done That: Excel Dashboard Hell

The creation of the KPI dashboard involves blessing from IT and buy-in from the business owners of the data. All are involved in a massive project to extract, transform and load data (ETL) into a data warehouse.

These projects are expensive, and only the companies with large resources can undertake this strategic but critical work. Production enterprise resource planning (ERP) systems, marketing and sales systems (e.g. CRM), financial and accounting systems, and customer service systems feed a data warehouse that theoretically will provide a complete picture of overall business health.

The dashboard, usually fed from other underlying intermediate data sources, informs the CEO accountable to a “bottom line” and controlled by the CFO who reports financial results. Downstream individual lines of business use more granular dashboards to understand various revenue and cost drivers to respond to pressures and take advantage of opportunities.

When successful, these projects can predict when a business model is working and provide insight to strategic decisions.

Most of the time though, these projects are departmental, designed by sales, marketing or operational teams. To collect the data across an enterprise, spreadsheets become the lingua franca, usually out of date, maintained by business owners and their Excel-minions. The data in a spreadsheet is usually laced with bias from the people who supply the data. Or worse, they are laden with inaccuracies or incompleteness due to the various transformations and inadequacies of the underlying data warehouse.

Cloud-enabled Microservices Transform Business Intelligence

As business applications move into the cloud, so too does the data that needs to be analyzed. But with the disaggregation of IT services around business capabilities into hundreds or even thousands of cloud “microservices,” what does the data challenge begin to look like?

It looks absolutely terrifying and chaotic.

image from cdn.chiefmartec.com

The “Era of the Cloud” (diagram from Matt Miller of Sequoia Capital) has a critical feature that makes sense of the potential chaos: each service has an underlying data model that is abstracted by an easily accessible API.

Because APIs are a critical component of integration between cloud services, they are also a ready-made data tap for a modern cloud-enabled data warehouse. In fact, the quality and stability of the API, and the ability to get direct access to transactional, non-summarized data, is a differentiation point when selecting cloud services.

A new type of data is also commonly available: “metadata” that measures every transaction, whether an anonymous click on a website or usage and telemetry data from an application user interface. Cloud services collect an extraordinary amount of this “exhaust” data, recorded by computers rather than forgotten by people. This new type of data, stored for pennies per terabyte, can point to performance indicators that have never been available before.

Answering Two Holy Grail Business Questions

  • How do you identify your most profitable customers and segments?
  • How do you attract more customers like them?

These two questions sum up the “bottom line” of most business models. Yet many businesses, large or small, are not positioned to answer them. They rarely have the necessary analytics in place to inform a decision maker whether or not their current business strategy is working or where improvements are in order.

Until recently, answering these types of questions could be prohibitively expensive and maybe even impossible. But keep these two questions in mind as we build an enterprise running entirely on API-enabled cloud services using a “data pipeline” and pose a set of underlying intermediate questions that can inform the answers to our two core questions.

  1. How much revenue do we generate by customer segment? How much cost do we have by each customer segment? How quickly can we analyze different segments of our customers? (Source: financial + CRM)
  2. What is our advertising ROI by channel? (Source: web and ad analytics, including external vendor)
  3. Are our customer success efforts affecting our churn rate? Can we optimize our customer support efforts? (Source: customer support + web application analytics)
  4. Are our sales people talking to the right target customer? Are there other prospects we are missing? (Source: CRM + marketing)
  5. How effective are our staff at innovation? How quickly can staff ramp up to be effective after hiring? How are they learning on the job? (Source: is there a system for this at all in organizations?)

We will need to centralize these data sources so that our business intelligence tool can draw from one aggregate pool of data that’s consistently updated. We will also need to automatically maintain a data dictionary and identify common identifiers shared between systems (e.g., an email address to identify a customer between pre- and post-sales systems).
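As a sketch of what a “common identifier” means in practice (the system names and fields here are invented for illustration), joining two sources on a normalized email address looks like this:

```python
def join_on_email(crm_rows, billing_rows):
    """Merge records from two systems keyed on a shared identifier:
    a lower-cased email address."""
    merged = {}
    for row in crm_rows:
        merged.setdefault(row["email"].lower(), {})["crm"] = row
    for row in billing_rows:
        merged.setdefault(row["email"].lower(), {})["billing"] = row
    return merged

crm = [{"email": "Pat@Example.com", "segment": "nonprofit"}]
billing = [{"email": "pat@example.com", "revenue": 1200}]
customers = join_on_email(crm, billing)
print(customers["pat@example.com"]["billing"]["revenue"])  # 1200
```

Note the lower-casing step: the same person is often recorded with differently capitalized addresses in different systems, and the data dictionary should record that normalization rule.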

Cloud Enabled Business Intelligence State of the Art

At NewPath Consulting we have researched the creation of cloud-based business dashboards since 2014. Cloud-based systems each have their own data silos, as illustrated below:

image from cdn.chiefmartec.com

Each cloud-based system has its own way of doing data analysis, usually very limited. In many ways, the cloud-based data problem is a lot more complex, because the proliferation of data silos is even more intense in the cloud than it was when data resided in several on-premise proprietary systems and document repositories.

The saving grace to the cloud data problem is two-fold:

  1. All cloud applications are designed with a REST API that allows the programmatic, real-time extraction of data.
  2. A new breed of data pipeline and visualization companies have developed highly effective tools democratizing data access and analysis.

Creating the Cloud-Enabled Data Warehouse

The following graphic from a Looker and RJ Metrics presentation illustrates several technical challenges of data integration common to all cloud data warehousing projects:

image from cdn.chiefmartec.com

Web traffic, distribution and online goal data can be managed in Google Analytics for example, but what about transactional data? How do you manage the data about the journey from an anonymous prospect to a long-term, profitable customer? And how could a business determine if a customer is profitable if measures of profitability have to come from multiple systems?

This is the core business problem in any business intelligence system — creating the necessary underlying relationships between various data sets.

At NewPath Consulting, customer profitability is dictated by:

  1. Cost of customer acquisition
  2. Number of customer service requests
  3. Number and length of longer term implementation or maintenance projects
  4. Underlying software and human resources needed to service those requests

The data to support these indicators post-sales can be tied together between systems easily: the email address(es) that the customer uses across multiple operational cloud services. In a pre-sales scenario we must count in aggregate, but customers decloak from anonymity with a direct interaction by phone or email or some other traceable identifier such as a web cookie or customer ID.
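A toy version of such a calculation might look like the following. The per-ticket and per-hour weights are illustrative assumptions for the sketch, not NewPath Consulting’s actual cost model:

```python
def profitability(indicators):
    """Rough per-customer profitability from the four indicators above.
    The unit costs below are assumed figures for illustration only."""
    cost = (indicators["acquisition_cost"]
            + indicators["support_requests"] * 50    # assumed cost per request
            + indicators["project_hours"] * 100)     # assumed cost per hour
    return indicators["revenue"] - cost

customer = {"revenue": 10000, "acquisition_cost": 1000,
            "support_requests": 4, "project_hours": 20}
print(profitability(customer))  # 6800
```

Once every indicator is keyed by the customer’s email address, a score like this can be recomputed continuously as new data flows through the pipeline.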

Data Pipelines to the Rescue

New tools such as Stitch, Alooma, Xplenty, ETLeap and Fivetran and even open-source solutions allow real-time “copy and paste” and “synchronize” of multiple cloud data sources into a cloud data warehouse, so it can be manipulated and visualized in many ways. Tools like Alooma and Xplenty enable data transformation services as well. Since writing this article, Stitch has launched an open-source project called Singer that enables the creation of "data taps" for many SaaS platforms. This well-written and researched August 2017 article from Mode Analytics details differences and evaluation criteria for data pipeline tools.

Visualization tools like Looker coupled with data pipelines make the building of a data warehouse a snap. Even G Suite add-ons such as Supermetrics can populate a Google Sheet to create a DIY cloud business intelligence system for SMBs.

NewPath Consulting has started to put together a content marketing dashboard using Google Analytics data to analyze daily, monthly and yearly trends in our content marketing efforts. We have also integrated our billing system into a steady stream of expense and customer payment data.

It is a satisfying experience to spin up a fully configured data warehouse in the cloud with a few clicks and in minutes start pumping raw data from multiple cloud sources manipulated through tried and true object/relational techniques.

Here’s an illustration of how these new data pipeline tools enable a totally new degree of analysis composed of a myriad of cloud-based data sources:

image from cdn.chiefmartec.com

In addition to providing fantastic visualization and reporting functionality, modern business intelligence tools often have a modeling layer as well that allows users to perform joins/transformations as needed.

Unlike in traditional ETL (extract, transform and load) systems, transformation is performed after the loading step (i.e., ELT — extract, load, then transform). The benefit is that the end-users — the ones who primarily work with the data — have a lot more power and access to raw data, and don’t have to depend on IT to accomplish the analysis they need.
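A toy illustration of the ELT idea, using SQLite as a stand-in for a cloud warehouse: raw rows are loaded untouched, and the transformation is just a query run afterwards by the analyst.

```python
import sqlite3

# Load first: raw rows land in the warehouse exactly as extracted.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_orders (email TEXT, amount_cents INTEGER)")
conn.executemany("INSERT INTO raw_orders VALUES (?, ?)",
                 [("a@example.com", 1250),
                  ("b@example.com", 300),
                  ("a@example.com", 450)])

# Transform afterwards: end users aggregate the raw data at query time,
# without waiting for an upstream ETL job to reshape it first.
rows = conn.execute("""
    SELECT email, SUM(amount_cents) / 100.0 AS revenue
    FROM raw_orders GROUP BY email ORDER BY email
""").fetchall()
print(rows)  # [('a@example.com', 17.0), ('b@example.com', 3.0)]
```

Because the raw table is never reshaped on the way in, a different analyst can run a completely different transformation against the same data tomorrow.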

Four Business Intelligence Futures

So where do we go from here? We believe there are four innovations on the horizon for businesses and the people that operate them:

Data citizens, unite. The democratization of cloud-based raw data, delivered through the immense compute and storage capability of cloud data warehouses like AWS Redshift will open up access to users beyond data scientists. Business users will become data citizens and end the reign of data priests and information fiefdoms.

Unlimited access to raw, unfiltered customer-oriented data. The transformation from an ETL to an ELT model will enable direct access not only to summarized data (e.g., Google Analytics dimensions and metrics) but also to customer behavioral and transactional data as well as exhaust “meta data.”

Business model canvas-on-steroids. The metrics to measure each part of a business model can be evidence based (rather than assumptive or intuitive), supported with key performance indicators, summarized in real-time from hundreds of disparate cloud based data sources.

The cloud-enabled business advisor. Imagine the ultimate solution to the CEO’s job of making better business decisions: a cloud-enabled, AI-powered advisor. When will an intelligent, virtual business advisor begin to whisper into the CEO’s ear with recommendations around potential acquisitions, resource reallocations, pricing adjustments, and structural reorganizations?

If you want to be kept up to date on this project, please let us know, and we’ll put you on a notification list.


How Effective Use of Analytics Increases Your Traffic - A Scientific Experiment Part II

As small business owners we have limited resources and we must make them count. That includes the time, money and effort put into creating content.

In a previous post we introduced our Average Pageviews Per Month spreadsheet report, which measures how well your content performs. The more pageviews per month over time, the more that content is resonating with your audience.

In this blog post, we will document how to configure and use the Average Pageviews per Month metric and a few other interesting reports we will build.

One of our most powerful reports is called “All Posts.” It automatically generates a snapshot of all traffic to your blog, from the beginning of your reporting history to the current day. You can then use formulas in your spreadsheet to control what data is displayed, and how.

Google Analytics has its limits

Why do this reporting outside of Google Analytics, when it is such a robust platform? When you go to Google Analytics, your starting points are almost always different - the date/time parameters and other elements of your dashboard are always set to what you looked at last. If you’re always resetting parameters, you can lose track. It’s like you’re always starting from scratch; you get a scattered point of view, and it’s hard to focus on a few metrics and measure how things are changing.

Filtering on certain parameters is also not intuitive and difficult to do on the fly while doing analysis. The filter parameters have to be set up each and every time you run a report, or you must set up “segments” - an advanced feature of Google Analytics.

A customized Google spreadsheet, like the one we showed you to set up in the last post, is always reflective of the same date parameters and filters. You get what you expect - every time. Plus most small business owners are already comfortable with reading a spreadsheet, while Google Analytics reports can take some getting used to. Also, when you use Google Analytics without a filter, you get all web pages and blog posts, instead of focusing on just the blog posts, which is what we’re trying to do here.

Another bonus of spreadsheet reports is that if you want to make changes to the data, you can always add new reports and run them using a new report configuration. Since all reports can automatically update every day, this is a powerful alternative to the Google Analytics system, configured exactly how you want.

A closer look at the All Posts report

Here are the metrics we want to capture in the All Posts report:

  • Pageviews (number of pageviews of that post)
  • Unique pageviews (counts a person once even if they loaded the page multiple times during the same session)
  • Average time on page (amount of time reading before clicking away)
  • Entrances (# of times someone actually started their web visit on that page)
  • Bounce rate (# of times started at that page but didn’t continue to any other pages on the site; the article didn’t engage them)
  • Exit rate (# of times this was the last page someone visited before leaving the site)

In Part One of this post we introduced the pattern matching language known as regular expression (also known as a regex). Here we’ll use regex again to filter for these dimensions:

  • Sort by descending order on number of pageviews using the directive -ga:pageviews
  • Page path - the text that appears after the domain name of your site, i.e., /myblogpost.html

The page path filter includes the regex code that identifies which of your website pages are actually blog posts. For example, the blog posts on our website use a URL pattern of /YYYY/MM/blog-post-name.html, so our regex will filter for all page URLs that start with /YYYY/MM. You can look at the URL of any of your blog posts to find out how your blog’s URLs (ie “permalinks”) are structured.
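For example, assuming the /YYYY/MM/post-name.html pattern just described, a regular expression like this one separates blog posts from other pages (tested here in Python; the same pattern works in the report's page path filter):

```python
import re

# Keep only paths that start with /YYYY/MM/ and end in .html;
# adjust this pattern to match your own permalink structure.
BLOG_POST = re.compile(r"^/\d{4}/\d{2}/[^/]+\.html$")

paths = [
    "/2016/09/analytics-experiment.html",  # a blog post
    "/about.html",                         # a static page
    "/2016/week-38/index.html",            # weekly index page, not a post
]
posts = [p for p in paths if BLOG_POST.match(p)]
print(posts)  # ['/2016/09/analytics-experiment.html']
```

Trying your candidate pattern against a handful of real URLs like this is a quick way to confirm it before wiring it into the report.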

Common permalink patterns

The two most common WordPress permalink patterns are nicknamed “ugly” and “pretty” (click here for more information about permalink structures). There are several other possibilities as well, which is why this filter will be fairly unique to every website.

For the purposes of our All Posts report, the most important elements to have in your permalink are the name of the blog post, and the year and month it was published. If that’s not your situation, get in touch about a function that allows you to grab metadata such as the publication date from the blog post itself.

This filter will include URLs that have year and month, and exclude ones that have the week in them, or other noise data and index pages that aren’t actually blog posts.

Now that we’ve finished that, we can run a report and populate a separate tab of our spreadsheet of raw data called All Posts report.

Here is a Google Spreadsheet template you can use to try this yourself. In Google Sheets you can also make a graph of your data very quickly, e.g., all articles with higher average pageviews than 1.


In the Google Sheets template, you’ll see that these two key columns are highlighted: D - Post Age Months, and E - Average Pageviews Per Month

Assuming that your permalink format is the one we talked about, two Google spreadsheet formulas do the work (you can see them in the Google Sheets template linked above). The first extracts the year and month from the page path and calculates the age of the post in months against today’s date. The second divides the post’s total pageviews by its age in months to give the average pageviews per month.
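Here is a sketch of the same two calculations in Python, assuming the /YYYY/MM/ permalink format described above (the spreadsheet formulas mirror this logic):

```python
from datetime import date
import re

def post_age_months(page_path, today):
    """Pull YYYY and MM out of the page path and count months to today
    (floored at 1 so brand-new posts don't divide by zero)."""
    year, month = map(int, re.match(r"^/(\d{4})/(\d{2})/", page_path).groups())
    return max(1, (today.year - year) * 12 + (today.month - month))

def avg_pageviews_per_month(pageviews, page_path, today):
    return pageviews / post_age_months(page_path, today)

today = date(2016, 9, 15)
print(post_age_months("/2015/09/popular-post.html", today))               # 12
print(avg_pageviews_per_month(120, "/2015/09/popular-post.html", today))  # 10.0
```

A post that is twelve months old with 120 total pageviews averages 10 pageviews per month, which is the number the report sorts on.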

So when you’re done, you get these two handy columns of data, and once you’ve filled down the remaining rows, you can sort on the data. Use the menu option Data-->Sort Range-->Data has a header row, and sort by average pageviews in descending order.

Once you do that, your most popular blog posts that are consistently getting traffic every month will float to the top. Now you can see which articles are generating traffic to your site and resonating with your audience consistently, and which ones are not.

The older articles that consistently have a large amount of page views will probably continue to drive traffic to your blog and the rest of your website. People are interested in what you’re talking about; the chart above follows a long tail. The goal is to push more articles to the head of the long tail, thus increasing your overall traffic while resonating with your target audience.

Near the bottom of the report you’ll see the posts that get less than one page view a month on average. The likelihood that anyone will read those articles is low. They may continue to drive occasional traffic to your site, but there is no point in doing anything to promote them.

The Netflix phenomenon

These articles may be well-written and interesting, but they’re just not hitting the mark. In fact, you’ll probably find that if your traffic to the blog follows a long tail, you have a small set of articles that are generating any real traffic, and a large number that are generating none at all.

And it should be obvious by now that as long as you generate more articles you will continue to build traffic, but once you stop it is unlikely traffic to your site will increase. Here’s an automatically updating chart of the traffic to this blog. You’ll notice the marked increase in September 2016 over September 2015 as we continue to write new articles and actively promote the existing popular blog articles in social media and through Google:


Click to see a full image of updated monthly blog traffic.

You can liken this to what happens on Netflix, the popular movie and TV streaming platform, where only a small number of videos out of their huge catalog of content are viewed the most. No one would suggest that Netflix - or you - delete the rest of the content that doesn’t rank, but that you focus on investing your resources - especially if limited - to create and promote popular, high-quality content.

You can’t just rattle 500 words off the top of your head and throw it up on your website with no editing, contemplation, or understanding of your audience. Google Analytics and customized reports like the All Posts spreadsheet give you all the feedback you need about whether you’re going in the right direction with your content or not. Because if you want your overall traffic to increase, your blog traffic must increase. That’s the power of blogging, if you’re doing it right.


How Effective Use of Analytics Increases Your Traffic - A Scientific Experiment

Small business owners know they should create content, but often produce that content willy-nilly, with no plan in mind. They suspect there is some science behind search engine optimization, but don’t know how to balance that with the artistry of writing.

Why people come to your site

A website generates traffic when its environment changes, i.e. new blog posts are added that people discover and want to read, or new inbound links are created that lead from other sites to yours, such as social media networks, or your referral partner websites (inbound link data is found in your Google Analytics Acquisition reports).

If you stop changing the environment (adding new blog posts, also known as content marketing), your traffic will level off or decline. We saw a stark example of that on our own site.

We blogged at least 1-2 times per month in 2014, and even hired writers to help us formulate our ideas. In 2015 we stopped doing that in favour of more opportunistic, event-driven articles (webinars, for example), and while that generated traffic, it never created the steady traffic that connects with long tail keywords (the highly specific search terms that potential customers actually use). We also didn't gain as many inbound links as a result.

Sometimes you can get lucky when there is flurry of new interest in a topic you’ve previously written about, and that can bring new traffic to those posts. However, the best way to get new eyes on old content - whether previously published blog posts or static pages like About, Events or Services - is to add new content such as blog posts. That’s because when people visit your site to read the new content, they’ll most likely also click on other pages.

Likewise, if you stop adding new content to your site, you’ll not only lose the traffic you would have gained to those new pages, but your existing pages won’t get the secondary exposure they would have enjoyed.

Before you can analyze your content’s sustainability and increase your site traffic, you need some baseline information, so the sooner you set up Google Analytics to start collecting data, the better. You want to understand who your audience is and what information they’re searching for.

You may think you know the answer, but the data may surprise you. In Google Analytics you’ll find this information in the Audience and Behaviour reports (if you don’t have Google Analytics set up on your site, follow these steps or get in touch and we can show you how to use this free resource!).

Behaviour reports tell you what people do when they visit your site - the first page they land on, the last page they view before they leave, and everything they do in between. That also includes the words and phrases they type into your site’s search box, i.e., what content they’re looking for on your website.

You’ll want to look at Behavior reports for your site’s most popular pages and most popular blog posts, so you can compare these results with the data for the new content you’re going to create, and how that new content affects views of your existing pages.


The Average Page Views Per Month report - Making Google Analytics work for you

Google Analytics produces an extraordinary amount of data, which can be intimidating. It may look and seem complicated, but it’s simply a set of dimensions and metrics that can be portrayed any way you wish. Here is a full list of what can be measured by Google Analytics.

As Lukas Oldenburg explains on Quora, a metric is usually something you can count, while a dimension is what you are applying the metric to. So for example, the dimension 'Page Title' can be analyzed via the metrics 'Pageviews', 'Unique Pageviews', 'Time on Page', 'Exit Rate' and so on.

If you want to know how effective your content is, I propose that the only relevant analytic is overall traffic trends, and page-level traffic trends. At NewPath Consulting we’ve come up with a report called Average Page Views Per Month, which is a customized dashboard that measures how well your content performs. It harnesses the most relevant content-related data collected by Google Analytics, so businesses can use that data to drive their content decisions.

Here is the formula:

Average Pageviews Per Month = Pageviews ÷ Age of Post (in Months)


The average pageviews per month tells you the content that has the most consistent views over the longest period of time. You’ll see how a high-quality, evergreen piece of content resonates with your audience over a long period.
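The formula above is simple enough to compute yourself from an analytics export. Here is a minimal sketch in Python (the function name and inputs are our own illustration, not part of the Google Analytics add-on):

```python
from datetime import date

def average_pageviews_per_month(pageviews, published, today=None):
    """Average Pageviews Per Month = Pageviews / Age of Post (in months)."""
    today = today or date.today()
    age_in_months = (today.year - published.year) * 12 + (today.month - published.month)
    # Treat brand-new posts as one month old to avoid dividing by zero
    return pageviews / max(age_in_months, 1)

# A post published two years ago with 2,400 total views averages 100 views/month
print(average_pageviews_per_month(2400, date(2023, 1, 15), today=date(2025, 1, 15)))
```

Because the metric normalizes by age, a three-year-old post and a three-month-old post can be compared on equal footing.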

To create a piece of quality content that generates traffic takes a lot of deliberation, thought and effort. For example, our team spent two months preparing the content of a recent webinar about how to run your small business with online forms, and another five hours to create, edit and distribute the follow up blog post.

This effort pays off, because creating new content will increase your overall high-quality traffic. Some posts endure and some posts decay. Analytics can help you predict these results and ensure you’re creating the right kind of content.


How to set up an Average Page Views Per Month report for your own website

Assuming you have already set up the Google Analytics add-on in your Google Sheets (here are instructions), now you’ll have to configure your report to get the information we’re looking for.

Here are the configuration options (click on the image for full size):

Google Analytics Configuration Options
Google Analytics Add On Configuration Options

Let’s go through each of these fields:

The Profile ID is provided by the Google Sheets add-on. Your start date should be when you started collecting analytics, and your end date should be the current date (use the =today() Google Sheets function to fill in the end date automatically).

Beyond the reports dashboard, Google Analytics is a data warehouse of all the metrics and dimensions that are collected and summarized all the time. The dictionary of dimensions and metrics is available for easy searching in the handy Dimensions & Metrics Explorer.

The metrics we will select for each blog post are pageviews, unique pageviews, average time on page, entrances, bounce rate, and exit rate.

The dimension is the page URL (without the domain name), sorted by pageviews in descending order, denoted by the - sign before ga:pageviews.
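Put together, the report configuration looks roughly like this (field names follow the Google Analytics add-on for Sheets; the Profile ID shown is a placeholder, and your dates will differ):

```
Report Name:        Average Page Views Per Month
View (Profile) ID:  ga:XXXXXXXX
Start Date:         2014-01-01
End Date:           =today()
Metrics:            ga:pageviews, ga:uniquePageviews, ga:avgTimeOnPage,
                    ga:entrances, ga:bounceRate, ga:exitRate
Dimensions:         ga:pagePath
Order:              -ga:pageviews
```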

The next part is key: filtering out the blog posts from the static pages on your site (such as About, Services, etc.). How do we do this?

Have you ever filled out a form and received an error message such as, “this is not a valid email address,” or “passwords do not match”? Did you ever wonder how the form knew that? The web developers who created these forms used a pattern matching language known as a regular expression (also known as regex).

In order to filter out your website’s static pages and analyze only your blog posts, we’re going to use a regex to configure the filter row of our Google Analytics spreadsheet. If you were to leave this row empty, you would get results from all the pages on your site.

You have to establish a pattern that tells Google Analytics how to identify your blog posts and differentiate between the static pages and blog posts on your site. The blog posts on our website use a URL pattern of sitename.com/YYYY/MM/blog-postname.html, so our regex filters all page URLs that start with /YYYY/MM. You can look at the URL of any of your blog posts to find out how your blog’s permalinks are structured.
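For permalinks like ours that begin with /YYYY/MM/, the pattern is a four-digit year followed by a two-digit month at the start of the page path. Here is a sketch of that regex in Python (the sample paths are hypothetical; adjust the pattern to your own permalink structure):

```python
import re

# Match page paths that begin with /YYYY/MM/ -- i.e., blog posts
blog_post_pattern = re.compile(r"^/\d{4}/\d{2}/")

paths = [
    "/2016/09/effective-use-of-analytics.html",  # blog post
    "/about.html",                               # static page
    "/services",                                 # static page
]

# Keep only the paths that look like blog posts
blog_posts = [p for p in paths if blog_post_pattern.match(p)]
print(blog_posts)
```

In the add-on’s filter row you would express the same idea as a Google Analytics filter on ga:pagePath using that regex; test it against a handful of your own URLs before trusting the report.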

You’ll need to experiment. See what results are returned with the set of regular expressions you used, and then tweak them as needed. If you need more help, get in touch!

Now that we’ve created the filter, we want to set up an automatically generated analysis page that will show you, at a glance, what we’re measuring.

Here is our finished report configuration, where you can see the highlighted column showing the average pageviews per month (click for a larger view):

Google Analytics URL Report in Google Sheets 

When you have this kind of history of content and its performance, you can make smarter decisions about what to write more about and what to write less about or not at all.

In next month’s article we will document how to configure and use the Average Pageviews per Month metric and a few other interesting reports we can generate.


Why Cloud Accounting?

by Upen Shah

Cloud-over-traditional-accounting (2)

Let me begin by asking how many of you have moved from flip phones to smart phones. It is safe to assume that most of you have. Now, think about why you moved – not because smart phones provided better phone service. The overwhelming reason was the number of applications available on smart phones that provided value simply not available on flip phones. Over time, the value of these applications grew to such an extent that smart phones that did not offer 'apps' lost their value (just ask Blackberry!). We are now in an era where the phone part of the smart phone is just another app alongside many other apps.

What happened to the phone industry is now happening to professional services – especially accounting services. Several established players as well as new startups, such as Intuit, Sage Simply Accounting, FreshBooks cloud accounting and Xero, are offering cloud accounting options (as opposed to the traditional desktop-based “software in a box” model). While desktop software still forms the bulk of the market, the trend is moving more and more towards the cloud. For example, Intuit’s QuickBooks Online now has more than 1.2 million users – up more than 49% from the previous year – while Xero has 600,000 users (1). This trend is going to accelerate as the industry moves towards its growth stage.

For small and medium business (SMB) owners, the benefits of moving to the cloud include the ability to outsource the bookkeeping and accounting functions instead of hiring an in-house bookkeeper or accountant. With no upfront costs for IT infrastructure, software purchases, document storage and so on, moving to a cloud accounting solution is a very affordable option. Cloud accounting offers a pre-built, pre-configured solution that’s operational from day one. Traditionally, small business owners have struggled to pay for hardware and software, keep pace with changing technologies, store and manage documents securely, and ensure compliance with tax laws and payroll regulations. Cloud accounting with integrated apps for document management, payroll, inventory and much more solves these problems easily and inexpensively. In addition, cloud accounting provides state-of-the-art data analytics: availability of up-to-the-minute financial data, a clear profit/loss picture, and effective budgeting, forecasting and analytical capabilities puts SMBs on par with large enterprises.

Two other trends will make the switch to cloud accounting even more compelling: the ability to access accounting software from your mobile phone, and the growing number of applications that integrate with cloud accounting software. QuickBooks Online has more than 2,000 such applications (1). This will be the critical tipping point for cloud accounting. A good number of SMB owners, especially sole proprietors, don’t have an office, while others conduct more and more business outside of it. The ability to review and manage their finances on the go from their mobile phones will be extremely important. Applications that manage receivables, payables, payroll, time tracking, inventory and many other functions will form the backbone of a system similar to apps on a smartphone, and provide value that few can imagine right now.

Remember, these technological changes are not restricted to accounting services. Just look at the wrist watch industry, where so-called smart watches from the likes of Apple, Samsung, Pebble and Fitbit are replacing traditional wrist watches with devices that can measure your heart rate, count the calories you’ve burned, track how long and how well you slept, tell you who is calling your phone, help you make calls and, of course, tell the time. The relentless march of technology will continue to transform businesses for the foreseeable future. It’s up to you to decide whether to join and benefit from the cloud revolution, or to fight or ignore it. Just remember: you cannot do today’s job with yesterday’s technology and be in business tomorrow.

(1) Intuit sheds its PC roots – New York Times –April 10,2016

Upen is a tech-savvy cloud accountant with a passion for helping small and medium businesses achieve financial success. You can contact him at upenshah@upenyourbooks.com


Google Analytics Referral Spam Removed Automatically

Google Analytics referral spam has long been a pain point for Google Analytics users. While Google had promised a solution last year, until now users had to resort to many workarounds in an attempt to keep the spam out of their Analytics reports. But as of this weekend, many people were reporting that Google Analytics referral spam is no longer showing up in reporting. Unfortunately, there is nothing official yet from Google on the issue.

The change does not appear to be retroactive, as we do see referral spam still in Google Analytics in January 2016 and before.  But as of February 2016, it seems that Google has finally solved the Google Analytics spam problem, as it appears no referral spam shows up – at least none of the commonly known ones.

One effect this will have on your overall traffic is a marked decrease in total visits and page views. This traffic was never from real people anyway, so your numbers will now be a better indication of genuine human traffic (rather than spam robots!).


GoDaddy Research Survey Says: "Most Very Small Businesses Aren't Fully Utilizing the Internet"

Two decades into the Internet becoming mainstream, you would think that nearly every business has planted their flag online. But in actuality the majority of the smallest of businesses aren’t fully plugged into the Internet, according to a landmark global survey commissioned by GoDaddy from Redshift Research.

59% of them don’t have a website – and, thus, full control of their online presence – according to the survey of 4,000 global very small businesses (defined as five workers or less) in Australia, Brazil, Canada, India, Mexico, Turkey, United Kingdom and United States.

Who are these small businesses?

41% are run by women.

And they are generally not only small in employees but customer base, with 64% having 100 or fewer customers.

And many are new: 39% have been in business for three years or less.

While many of these very small businesses do have some form of Internet presence through social media platforms, they reported feeling that their operation was simply too small to warrant a website (35% of respondents). Others cited a lack of technical expertise (21%) or the costs of starting a website (20%).

Read the complete survey results here.


Our first webcast with Mad Mimi

We were honoured to be part of Mad Mimi's regular Wednesday afternoon webcast. If you missed it you can read up and watch the whole interview on the Mad Mimi YouTube channel. Or just check it out below.