Categories
tutorial

Editing the geometries of an online map with QGIS, PostGIS, and LeafletJS

This solution lets us open in QGIS the cartographic data published on a web map, for example one published with LeafletJS. I am using Windows 10 and, with the PuTTY software, I create an SSH tunnel to port 5432 of a server running PostGIS. The data can then be edited with the powerful QGIS software, which natively supports connections to a Postgres database; this way the data, extracted directly from the database in GeoJSON format, is displayed immediately on the web map.
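As a minimal sketch of how features could be extracted from the database as GeoJSON, as the post describes (the table my_layer and its columns geom and name are hypothetical; ST_AsGeoJSON is the standard PostGIS serializer):

-- Hypothetical example: serialize each feature of my_layer as a GeoJSON Feature
SELECT json_build_object(
    'type', 'Feature',
    'geometry', ST_AsGeoJSON(geom)::json,
    'properties', json_build_object('name', name)
)
FROM my_layer;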

Categories
Servizi slack-gis-tips tutorial

Introducing CARTO SalesQuest: Location-Based Sales Analytics

We’ve talked a lot on this blog about how different business functions use Location Intelligence in very different ways to solve very different problems.

This has led customers to ask us if we would ever build specific solutions for use cases. Today, with years of experience and thousands of customer applications deployed, we’re excited to announce our first solution built on the CARTO platform: CARTO SalesQuest.

CARTO SalesQuest is a location-based Sales Analytics solution that applies spatial analysis and location data streams to your company’s sales CRM data, boosting your team’s sales performance.

Analyzing sales data and making strategic decisions to improve your sales team’s performance has traditionally relied on sales analytics tools that focus on when sales happen: How long is my team’s sales cycle? What is our pipeline for the next three months? What is our win rate for last fiscal quarter?

These are vitally important questions for a sales team, but in order to truly optimize your sales performance in real time, you need the ability to ask questions about where sales happen:

  • Where are there high-value opportunities in my team’s sales territory?
  • Where should I hire, assign, and deploy additional field reps based on opportunity value?
  • Where might there be potential new customers based on open data about demographics or consumer trends?

CARTO SalesQuest puts the power of location right in your sales reps’ hands, helping them to find locations of nearby prospects, visualize their customers according to time of last touch, or even prioritize sales visits based on opportunity value.

Boosting sales performance with location

We’ve spoken with lots of sales leaders about optimizing sales practices around customer segmentation, sales territory design, and territory management.

While developing CARTO SalesQuest, we asked sales leaders what was most important for them in a sales analytics solution. Here’s what they said:

  • Role-Based Access. Assign view access according to your organization’s internal structure to maintain workflows for existing sales territories.
  • Mobile & Desktop Ready. Access SalesQuest in the office or in the field with a responsive design that adjusts to whatever size devices you and your sales team use.
  • CRM ready. SalesQuest is ready to plug and play with your existing CRM system.

These features are important, but we also knew that in order for sales teams to optimize in real-time, we would have to augment their CRM sales data with other location data streams.

What does this look like in the world of field sales?

A leading security company we work with was able to give their sales reps data about their opportunities that other companies weren’t able to provide. They used Open Data on crime statistics across the different cities their reps were assigned to in order to identify potential new business for alarm and security service sales.

Optimize sales visit schedules

In a recent Salesforce study, a large majority of sales representatives cited internal inefficiencies as the cause of their team’s productivity gap. In fact, respondents admitted that on average only ⅓ of their work week is spent selling, while the rest is spent on more administrative tasks.

Addressing this challenge is all about making the field experience more intuitive, making selling as easy as planning a route in Google Maps or booking a hotel in the ideal location on Booking.com. That’s precisely why SalesQuest is built on CARTO’s simple interactive map interface so that planning an efficient business trip is data-driven, but still simple. This allows your reps to:

  • Visualize the location of nearby prospects, customers and sleepers. Minimize travel costs using route optimization when planning visits
  • See customers according to time of last touch. Maintain customer satisfaction with balanced coverage across sales area
  • Visualize nearby renewal opportunities. Reduce churn rates with visits to customers with expiring contracts
  • Identify highest value opportunities. Prioritize sales visits by highest value customers and prospects

For one client, equipping field sales reps with SalesQuest has led to a 6% increase in ASPs recorded by sales representatives, a 9% increase in the number of clients visited per month, and a 12% decrease in travel time for sales representatives.

Identify sub-optimal sales behavior

Operations managers can also take advantage of role-based access views to analyze trends and patterns in sales behavior that could be putting their quota well out of reach.

Recently, a company’s head of sales for Europe found that the average selling price on new transactions had been decreasing significantly. The head of sales wanted to figure out how and where her sales team could change behaviors to make sure this trend didn’t continue.

The image below shows a map of the CRM’s sales data, filtered to new business opportunities with an average price of $60,000 or less within the European sales region.

SalesQuest Demo

The distribution of opportunities is spread out across the continent, which doesn’t yet provide the head of sales with actionable insights on how best to change sales behavior. But filtering the sales data down to the time period when the most sales are closed, which tends to be the end of the quarter, may help identify which area is in need of help.

In the image below we see a drastic difference in the number of low-dollar new transactions during the end of the sales quarter in and around Germany.

SalesQuest Demo

Upon closer inspection, we can pinpoint that most of the low-dollar new business transactions occur in the city of Hamburg, Germany.

SalesQuest Demo

This granular insight allowed the head of sales to begin implementing changes for this specific team on the ground, refocusing them on higher-value new opportunities, business expansion, and renewal opportunities.

Our team is ready and waiting to hear from you. What are your biggest sales optimization challenges? Where do you feel like you have sales blind spots? Reach out to our team to start a conversation!

Categories
slack-gis-tips tutorial

Movement data in GIS #9: trajectory data models

There are multiple ways to model trajectory data. This post takes a closer look at the OGC® Moving Features Encoding Extension: Simple Comma Separated Values (CSV). This standard was published in 2015, but I haven’t been able to find any reviews of it (in a GIS context or anywhere else).

The following analysis is based on the official OGC trajectory example at http://docs.opengeospatial.org/is/14-084r2/14-084r2.html#42. The header consists of two lines: the first provides some meta information, while the second defines the CSV columns. The data model is segment based. That is, each line describes a trajectory segment with at least two coordinate pairs (or triplets for 3D trajectories). For each segment, there is a start and an end time, which can be specified as absolute or relative (offset) values:

@stboundedby,urn:x-ogc:def:crs:EPSG:6.6:4326,2D,50.23 9.23,50.31 9.27,2012-01-17T12:33:41Z,2012-01-17T12:37:00Z,sec
@columns,mfidref,trajectory,state,xsd:token,"type code",xsd:integer
a, 10,150,11.0 2.0 12.0 3.0,walking,1
b, 10,190,10.0 2.0 11.0 3.0,walking,2
a,150,190,12.0 3.0 10.0 3.0,walking,2
c, 10,190,12.0 1.0 10.0 2.0 11.0 3.0,vehicle,1

Let’s look at the first data row in detail:

  • a … trajectory id
  • 10 … start time offset from 2012-01-17T12:33:41Z in seconds
  • 150 … end time offset from 2012-01-17T12:33:41Z in seconds
  • 11.0 2.0 12.0 3.0 … trajectory coordinates: x1, y1, x2, y2
  • walking … state
  • 1 … type code

My main issues with this approach are:

  1. They missed the chance to use WKT notation to make the CSV easily readable by existing GIS tools (see the sketch after this list).
  2. As far as I can see, the data model requires a regular sampling interval because there is no way to store time stamps for intermediate positions along trajectory segments. (Irregular intervals can be stored using segments for each pair of consecutive locations.)
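Regarding the first point, here is a hypothetical variant of the first data row, with the segment geometry encoded as WKT that most GIS tools could parse directly:

a,10,150,"LINESTRING(11.0 2.0, 12.0 3.0)",walking,1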

In the common GIS simple feature data model (which is point-based), the same data would look something like this:

traj_id,x,y,t,state,type_code
a,11.0,2.0,2012-01-17T12:33:51Z,walking,1
a,12.0,3.0,2012-01-17T12:36:11Z,walking,1
a,10.0,3.0,2012-01-17T12:36:51Z,walking,2
b,10.0,2.0,2012-01-17T12:33:51Z,walking,2
b,11.0,3.0,2012-01-17T12:36:51Z,walking,2
c,12.0,1.0,2012-01-17T12:33:51Z,vehicle,1
c,10.0,2.0,2012-01-17T12:35:21Z,vehicle,1
c,11.0,3.0,2012-01-17T12:36:51Z,vehicle,1

The main issue here is that there has to be some application logic that knows how to translate from points to trajectories. For example, trajectory a changes from walking,1 to walking,2 at 2012-01-17T12:36:11Z, but we have to decide whether to store the previous or the following state code for this individual point.
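To make that application logic concrete, here is a sketch of how trajectories could be rebuilt from such a point table in PostGIS (the table name trajectory_points is illustrative):

-- Collect each trajectory's points into a line, ordered by timestamp
SELECT traj_id,
       ST_MakeLine(ST_MakePoint(x, y) ORDER BY t) AS geom
FROM trajectory_points
GROUP BY traj_id;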

An alternative to the common simple feature model is the PostGIS trajectory data model (which is LineStringM-based). For this data model, we need to convert time stamps to numeric values, e.g. 2012-01-17T12:33:41Z is 1326803621 in Unix time. In this data model, the data looks like this:

traj_id,trajectory,state,type_code
a,LINESTRINGM(11.0 2.0 1326803631, 12.0 3.0 1326803771),walking,1
a,LINESTRINGM(12.0 3.0 1326803771, 10.0 3.0 1326803811),walking,2
b,LINESTRINGM(10.0 2.0 1326803631, 11.0 3.0 1326803811),walking,2
c,LINESTRINGM(12.0 1.0 1326803631, 10.0 2.0 1326803771, 11.0 3.0 1326803811),vehicle,1

This is very similar to the OGC data model, with the notable difference that every position is time-stamped (instead of just having segment start and end times). If one has movement data which is recorded at regular intervals, the OGC data model can be a bit more compact, but if the trajectories are sampled at irregular intervals, each point pair will have to be modeled as a separate segment.
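Here is a minimal sketch of the LineStringM model in PostGIS (table and column names follow the CSV above; ST_IsValidTrajectory and ST_LocateAlong are standard PostGIS functions):

-- Store trajectory a's first segment as a measured linestring
CREATE TABLE trajectories (
    traj_id    text,
    state      text,
    type_code  integer,
    trajectory geometry(LineStringM, 4326)
);

INSERT INTO trajectories VALUES (
    'a', 'walking', 1,
    ST_GeomFromText('LINESTRINGM(11.0 2.0 1326803631, 12.0 3.0 1326803771)', 4326)
);

-- Check that M values increase along the line
SELECT traj_id, ST_IsValidTrajectory(trajectory) FROM trajectories;

-- Interpolate the position at Unix time 1326803700
SELECT traj_id, ST_AsText(ST_LocateAlong(trajectory, 1326803700)) FROM trajectories;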

Since the PostGIS data model is flexible, explicit, and comes with existing GIS tool support, it’s my clear favorite.

Categories
tutorial

Introducing Voyager: The New CARTO Basemap

The basemap is the unsung hero of cartography.

Regardless of the intended purpose, scale, or theme, the basemap must always provide the proper context, use a clear visual language, and effectively communicate the underlying data.

For that reason, we’re constantly working to improve our basemaps and have decided to release a new version: Voyager.

Designed by our talented Head of Cartography, Mamata Akella, this mobile-optimized basemap comes in three flavors (labels on top, labels under, and no labels). In addition, this iteration makes it even easier to incorporate vital street-level data, better understand the potential flow of traffic within an area, and create stunning, intelligible maps.

Door to Door Service: Building Addresses Now Available

At the largest zoom, individual building addresses now appear directly on the map, allowing users to operate at the level of the street.

For example, a real estate firm can now use Voyager to track and manage its properties. Or, city planners can visualize and more quickly respond to all of the residences and businesses that have made 311 calls, reported crimes in their area, or need to have their driveways ploughed for snow.

A financial firm can even create a map that easily visualizes its highest value properties and identify where additional investments and opportunities are located without additional building data.

Main Boulevard, not Main Street: Visualizing the Roadways

Because so many of our users rely on CARTO to show how people move within an area, we wanted to emphasize the arteries of any place: the streets.

Under Positron, every avenue, highway, and backwoods road was depicted using the same white color.

In Voyager, we’ve used different colors to show a clear hierarchy of highways, major roads, and minor roads. We’ve also labeled each one, which is more than just an aesthetic improvement.

Voyager Positron

Under this new system, users can identify specific road segments and intersections, giving their maps yet another level of complexity.

For example, after a natural disaster, first responders can identify streets that have been damaged and assign service providers accordingly. While the crews work on repairs, officials can devise the safest and most efficient detours, allowing the entire system to recover as quickly as possible.

And, just like Positron and Dark Matter, all information about roads, buildings, and points of interest is regularly updated and maintained for accuracy.

Tiny Tweaks: A Host of New Design Features

Voyager Darkmatter

We’ve also made several subtle design changes in order to ensure that your maps are as clear as possible.

City icons are now squares instead of circles to eliminate any confusion. Bodies of water now have drop shadows to further emphasize important boundaries.

Labels are no longer written in all caps for greater legibility, and we’ve limited their size to remove unnecessary clutter. And, as a user zooms further, more labels appear, which allows maps to contain rich details without becoming overcrowded.

If you need a few weeks to adjust to this new basemap, no need to worry. All of your existing maps will remain as they are. However, anything new that you build will be on Voyager, except for Legacy Editor maps, which will still use Positron. And if the only change you can handle at the moment is the transition from summer to fall, you’re in luck. Positron and Dark Matter will still be available in the basemap selector.

Categories
Map tutorial slack-gis-tips tutorial

Quick reference guides for learning individual CARTO Builder features

Categories
slack-gis-tips tutorial

Set Up a PostGIS Database With Docker

For GIS users who haven’t heard of PostGIS:

PostGIS is a spatial database extender for PostgreSQL object-relational database. It adds support for geographic objects allowing location queries to be run in SQL.

In this tutorial post we’ll cover how to install and run a PostGIS database on Windows using Docker, a popular containerization platform. We’ll then connect to the database and use it in both QGIS and ArcGIS Pro.

Why PostGIS?

PostGIS logo

If, up to this point, you have only used flat files to store your spatial data, such as shapefiles, file geodatabases, or SpatiaLite databases, you might not know what the use case would be for software like PostGIS. In short, PostgreSQL (or postgres, for short) is a database server, meaning it is a program that accepts connections from one or more clients and allows them to interact with the data the server manages. PostGIS is an extension installed on top of PostgreSQL that gives the database server the means to store and manipulate spatial data, as well as perform spatial calculations and analysis. PostGIS can perform spatial operations on both vector and raster data.
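To make that concrete, here is a hypothetical example of the kind of location query PostGIS lets you run directly in SQL (the table points_of_interest and its columns are made up for illustration):

-- Find points of interest within 1 km of a given lon/lat
-- (assumes geom is stored in EPSG:4326)
SELECT name
FROM points_of_interest
WHERE ST_DWithin(
    geom::geography,
    ST_SetSRID(ST_MakePoint(-75.70, 45.42), 4326)::geography,
    1000  -- metres
);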

A more comprehensive comparison of PostGIS vs flat-file databases can be found here.

Why Docker?

Docker logo

Docker is a beast. It can be a bit intimidating for new users who aren’t familiar with the command line, but it provides you with a means to build and run software in a very consistent and controlled way, building upon a technology called LXC containers.

What does that mean for the average GIS’er? It means that you can download and run software like Postgres + PostGIS on any machine with minimal configuration very quickly. For anyone who has tried to install Postgres + PostGIS on Windows before that’s kind of a big deal, because it can be a giant pain in the ass. That’s not to say that using Docker can’t be frustrating, but I’ve found it to be very quick and intuitive once I got the hang of it.

Install Docker Toolbox

Now, I wish this step could be as easy as saying “go here and download installer X”, but unfortunately it’s not that simple. Docker, being enterprise-grade containerization software, has many different versions you can install, depending on your host operating system.

Docker for Windows requires Windows 10 with Hyper-V virtualization enabled, which is only available on 64-bit Windows 10 Professional or Enterprise. I will not be using this software in this tutorial, and can’t confirm that the steps will be the same, although they will most likely be very similar.

Docker Toolbox is available for other versions of Windows as well as Mac, and is the version I will be using for this tutorial. It uses VirtualBox to run your containers in conjunction with Docker.

After you have downloaded and installed Docker Toolbox, you should now be able to launch the Docker Quickstart Terminal from the Windows start menu.

Running a Docker Container

Screenshot of the Docker quickstart terminal.
Note the IP address in the welcome text at the top.

When you installed Docker Toolbox, it installed and configured a Linux virtual machine on VirtualBox, where your containers will actually run. Let’s test our installation by running a container with the NGINX web server. This will not only prove that our container is running, but that we can access it through the network abstractions created by VirtualBox and Docker.

In the terminal, type the following command:

docker run --name=nginx -d -p 80:80 nginx

Let’s dissect this command bit by bit:

  • docker run is the command to tell Docker you want to run a container. There are loads of other commands, simply type docker to see them all. We’ll cover some other commands shortly.
  • --name=nginx specifies the name of the container we’re about to create. When working with Docker you can refer to a container either by its name or its ID, which is a random alphanumeric string. If you don’t specify a name, one will be generated from two randomly selected words separated by an underscore.
  • -d tells Docker to run this container in a detached state (i.e. in the background)
  • -p 80:80 tells Docker to map the host’s port 80 to the container’s port 80. This is important, as containers will not automatically expose their ports to the outside. As we’re running a web server, which serves HTTP requests over port 80, this option allows requests from outside the container to reach the web server. We’ll expose a port in a similar way when we run our PostgreSQL database later.
  • nginx is the container we would like to run. Docker includes the ability to pull containers from Docker Hub, which is a site with a collection of official and user-generated containers that are ready to use.

After you’ve run the command you should get output similar to this:

$ docker run --name=nginx -d -p 80:80 nginx
Unable to find image 'nginx:latest' locally
latest: Pulling from library/nginx
bc95e04b23c0: Pull complete
aee0c172e58a: Pull complete
c2a5d8ccfabc: Pull complete
Digest: sha256:adea4f68096fded167603ba6663ed615a80e090da68eb3c9e2508c15c8368401
Status: Downloaded newer image for nginx:latest
1743234776d03e8a39cd851324e68f93919d0ba13672d6a7d60b80c4558ee90f

Docker downloads the necessary images in order to run the NGINX container, then creates a new container called nginx. The last line is the container ID. As mentioned in the welcome text when you started the terminal, the containers will be running on a VM with a specific IP address; in my case it is 192.168.99.100 (yours may be different, so it might be worthwhile to check).

Another way to get the IP address of your Docker VM is to run the command:

docker-machine ip

In your web browser, navigate to the IP address of your Docker VM. You should see the NGINX welcome screen, which demonstrates that the container is running and serving requests on the port we exposed.

Screenshot of the nginx welcome screen.
Navigate to the IP.

Listing Running Containers

With Docker you can have many containers running at the same time. To list them you can use the command docker ps. It should output something similar to this:

$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                NAMES
99fa53dffb90        nginx               "nginx -g 'daemon ..."   5 minutes ago       Up 2 minutes        0.0.0.0:80->80/tcp   nginx

Stopping & Deleting The Container

To stop and remove the NGINX container you use the docker stop and docker rm commands. Containers must be stopped before they can be removed.

Run the command:

$ docker stop nginx

It will then print the name of the container that was stopped. Next, if you run docker ps again you will see that the nginx container is no longer running. If you’d like to list all containers, including the ones that have stopped, run docker ps -a. It should output something similar to the following. Note the STATUS field to see when the container exited:

$ docker ps -a
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS                          PORTS               NAMES
99fa53dffb90        nginx               "nginx -g 'daemon ..."   9 minutes ago       Exited (0) About a minute ago                       nginx

Lastly, you can delete the stopped container by using the docker rm command. It will print the name of the container after it has been deleted.

docker rm nginx

Running PostGIS

Congratulations! You’ve run and managed your first Docker container! The next step is to get our spatial database running.

One of the advantages of Docker I mentioned before is the ability to leverage both official and community built containers to get your software running quickly no matter what machine you’re running it on. If you look through Docker Hub there are a few PostGIS containers available for you to use. For this example I will be using the Dockerfile (a set of instructions used to create a finished container) provided by the kartoza/postgis repository.

Persisting Your Data

Before we create the database we need to think about how our database files will be stored within Docker. Normally when you create a Docker container, you are not meant to enter the container’s filesystem to copy or modify data. This means that unless we specify what’s called a volume in Docker, our database data will be saved inside the container, making it difficult or impossible to perform backups or upgrades to the database software without losing all your data.

What we’re going to do is create a named volume that will be used to persist the PostgreSQL database files outside of the container that runs the database process:

docker volume create pg_data

If you wish to read more about using volumes with Docker you can read the documentation here.

Creating the Database Container

Next we’ll use docker run to create the PostGIS container:

docker run --name=postgis -d -e POSTGRES_USER=alex -e POSTGRES_PASS=password -e POSTGRES_DBNAME=gis -e ALLOW_IP_RANGE=0.0.0.0/0 -p 5432:5432 -v pg_data:/var/lib/postgresql --restart=always kartoza/postgis:9.6-2.4

Let’s break down this command part by part. More examples of how you can run this container can be found on Docker Hub.

  • docker run --name=postgis tells Docker our new container will be named postgis
  • -d run the container in the background (detached mode)
  • -e POSTGRES_USER=alex the -e flag sets an environment variable inside the container. This one configures the name of a login role in PostgreSQL that will have superuser (admin) privileges in the database. You can set this to whatever you want.
  • -e POSTGRES_PASS=password sets an environment variable that will set the password of the login role to password. You can set this to whatever you want.
  • -e POSTGRES_DBNAME=gis as you can probably guess, this environment variable tells the container to create a new database on the server with the name gis. After the database is created, the PostGIS extension will be enabled on it.
  • -e ALLOW_IP_RANGE=0.0.0.0/0 tells the container to configure PostgreSQL to accept connections from anyone. If you did not set this then the database would only accept connections from addresses using the Docker networking subnet.
  • -p 5432:5432 maps the port 5432 on the host VM to port 5432 on the container. This is required because the database server listens for connections on port 5432 by default.
  • -v pg_data:/var/lib/postgresql tells the container filesystem to mount the pg_data volume we just created to the path /var/lib/postgresql. This means that any data that the container saves or creates in that directory will instead be saved in the pg_data volume.
  • --restart=always creates a restart policy for your container. Now your container will start every time the Docker virtual machine starts. If this was not set, you would have to manually start the container with docker start postgis every time the VM booted up.
  • kartoza/postgis:9.6-2.4 tells Docker to pull the kartoza/postgis repository from Docker Hub, using PostgreSQL version 9.6 and PostGIS version 2.4. You can see other versions that are available on Docker Hub

That command is definitely a mouthful. Once the images have downloaded, you should see that the container has started by using docker ps:

CONTAINER ID        IMAGE                      COMMAND                  CREATED             STATUS              PORTS                    NAMES
748ba3fabd31        kartoza/postgis:9.6-2.4   "/bin/sh -c /start..."   About an hour ago   Up About an hour    0.0.0.0:5432->5432/tcp   postgis

If you want to see log output from your container you can do so by using docker logs:

$ docker logs postgis

PostgreSQL stand-alone backend 9.6.5
2017-10-29 19:10:41.256 UTC [24] LOG:  could not bind IPv6 socket: Cannot assign requested address
2017-10-29 19:10:41.256 UTC [24] HINT:  Is another postmaster already running on port 5432? If not, wait a few seconds and retry.
2017-10-29 19:10:41.256 UTC [24] WARNING:  could not create listen socket for "::1"
backend> backend> postgres ready
2017-10-29 19:10:41.277 UTC [27] LOG:  database system was shut down at 2017-10-29 19:10:41 UTC
2017-10-29 19:10:41.278 UTC [28] [unknown]@[unknown] LOG:  incomplete startup packet
2017-10-29 19:10:41.279 UTC [27] LOG:  MultiXact member wraparound protections are now enabled
2017-10-29 19:10:41.285 UTC [24] LOG:  database system is ready to accept connections
2017-10-29 19:10:41.285 UTC [32] LOG:  autovacuum launcher started
Postgis is missing, installing now
Creating template postgis
Enabling template_postgis as a template
UPDATE 1
Loading postgis extension
CREATE EXTENSION
Enabling hstore in the template
CREATE EXTENSION
Enabling topology in the template
CREATE EXTENSION
                                 List of databases
       Name       |  Owner   | Encoding | Collate |  Ctype  |   Access privileges
------------------+----------+----------+---------+---------+-----------------------
 gis              | alex     | UTF8     | C.UTF-8 | C.UTF-8 |
 postgres         | postgres | UTF8     | C.UTF-8 | C.UTF-8 |
 template0        | postgres | UTF8     | C.UTF-8 | C.UTF-8 | =c/postgres          +
                  |          |          |         |         | postgres=CTc/postgres
 template1        | postgres | UTF8     | C.UTF-8 | C.UTF-8 | =c/postgres          +
                  |          |          |         |         | postgres=CTc/postgres
 template_postgis | postgres | UTF8     | C.UTF-8 | C.UTF-8 |
(5 rows)

2017-10-29 19:10:43.514 UTC [24] LOG:  received smart shutdown request
2017-10-29 19:10:43.514 UTC [32] LOG:  autovacuum launcher shutting down
2017-10-29 19:10:43.517 UTC [29] LOG:  shutting down
2017-10-29 19:10:43.524 UTC [24] LOG:  database system is shut down
Postgres initialisation process completed .... restarting in foreground
2017-10-29 19:10:43.564 UTC [126] LOG:  database system was shut down at 2017-10-29 19:10:43 UTC
2017-10-29 19:10:43.566 UTC [126] LOG:  MultiXact member wraparound protections are now enabled
2017-10-29 19:10:43.568 UTC [123] LOG:  database system is ready to accept connections
2017-10-29 19:10:43.568 UTC [130] LOG:  autovacuum launcher started

Connecting In QGIS

You should now be able to add the connection to PostGIS in the Browser panel in QGIS. Right-click the PostgreSQL icon and click “Create New Connection”, then enter the database connection parameters you used in the docker run command. The “Host” parameter will be the IP of your Docker VM (you can get it by running docker-machine ip in the terminal window).
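Once connected, a quick way to confirm that everything is wired up is to run a query against the database, for example from the DB Manager SQL window (postgis_full_version is a standard PostGIS function):

-- Should report the PostGIS and PostgreSQL versions if the extension is enabled
SELECT postgis_full_version();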

Creating Or Importing Geometry

QGIS has a geoalgorithm called “Import into PostGIS” that can load layers from various sources into your PostGIS database.

You can also use the DB Manager plugin to create new database schemas, as well as import and export files from the database. In the screenshot below I have loaded an extract from OpenStreetMap of roads in the Ottawa area.

Screenshot of QGIS depicting roads in the Ottawa area.

Connecting In ArcGIS Pro

Screenshot of the Catalog pane in ArcGIS Pro

You can also perform visualization and analysis on your data in ArcGIS Desktop. There are, however, some limitations on what kinds of operations you can perform. During my limited testing with ArcGIS Pro 2.0 I was not able to perform feature editing or data management tasks such as modifying tables (adding/dropping fields, etc.). I was able to load and visualize the data, as well as run geoprocessing tools on it. I was also able to export the geometry to a file geodatabase, as well as delete entire tables. I think this is either a licensing issue, or I have to install some extra shared libraries into PostgreSQL in order to use the ST_Geometry type.

Creating A New Database Connection

Connecting to your database is as simple as adding a new database connection in either ArcCatalog or the Catalog pane in ArcGIS Pro. You can use the same credentials and connection information you used in the environment variables when you created the container. The “Host” parameter is set to the IP of your Docker VM (if you don’t have it, you can get it by running docker-machine ip in the terminal window).

Creating A New Schema

When I first tried to view the road data I had loaded into the database in ArcGIS Pro, I was unable to find it in the Catalog pane. I later discovered this is because, by default, ArcGIS looks in the database schema with the same name as the currently connected database user. In my case the login name for PostgreSQL is alex, so ArcGIS Pro is looking for tables in the schema alex. Creating a new schema is easily accomplished using the DB Manager plugin in QGIS; however, if you don’t have QGIS installed on your machine you can run a SQL query using Docker to create it.

To create a new schema in your PostGIS database, run the following command in the quickstart terminal. Make sure to replace the <PASSWORD>, <DBNAME>, and <USERNAME> parameters with the ones you used when you created the container.

docker exec -it postgis /bin/bash -c "PGPASSWORD=<PASSWORD> psql -d <DBNAME> -U <USERNAME> -h localhost -c \"create schema <USERNAME>;\""

What this command does is execute a command inside the postgis container. In this case, it is executing a SQL command using psql, a command-line PostgreSQL client that will create the schema you need.

Viewing Data

In this state, utilizing ArcGIS with PostGIS is kind of a chicken-and-egg situation, as I was not able to create new data in the database, although I was able to view and analyze already existing data. After the new schema was created, and a roads shapefile was imported into that schema using QGIS, I was able to successfully bring the data in and view it in ArcGIS Pro.

Screenshot of ArcGIS Pro depicting roads in the Ottawa area.
Notice the layer is named with the database name, schema name, then the table name.

Conclusion

In this over-arching tutorial we went over the basics of how to use a spatial database server, and how to deploy one using Docker. There’s a lot more you can accomplish using container-based software in your projects, and I hope to write about it later. One thing I have been experimenting with is deploying GeoServer alongside PostGIS using Docker. I’m also interested in exploring whether there’s an easy way to gain more functionality on the Esri side of the house, as well as seeing if I can get volume mount points outside of the VirtualBox VM and into the host OS.

Categories
slack-gis-tips tutorial

New OSM routing plugin for QGIS OSM Tools

Two years ago I wrote the OSMroute plugin, which enables QGIS users to use the OpenRouteService API for geocoding points, accessibility analysis, and routing from A to B. Unfortunately, maintaining a plugin/open source tool is time-consuming and hard. But fortunately Nils Rode has lifted the plugin to a new level: OSM Tools.

OSM Tools Plugin

The plugin can be installed as a normal plugin via the plugin manager:

OSM tools plugin in QGIS plugin manager

The usage is quite easy and enables you to geocode/reverse geocode addresses, find routes, and create isochrone maps. Yet as the API from openrouteservice.org is an open one, there are some limitations:

  • 500 free requests per day in total
  • 40 requests per minute

OSM tools interface

You could easily be affected by these restrictions, as the plugin allows some batch jobs. The batch mode enables row-by-row or row-by-any routing.

Different types of batch modes.

The result is really good: a clean feature table with all the needed information about the routes:

OSM Tools Batch mode result

Nils did a great job on this one, and I encourage everyone to use it with your own API key, look for issues (and report them!), and help this plugin grow!

Categories
tutorial

QGIS-Plugin: GeoSearch

Searching for addresses or calculating distances and routes in QGIS? No problem with the QGIS plugin “GeoSearch” by Walter Tsui. It uses data from the Google Maps service. I tried it out; here is the address search.

Categories
tutorial

Dynamic styling expressions with aggregates & variables

In a recent post, we used aggregates for labeling purposes. This time, we will use them to create a dynamic data-driven style, that is, a style that automatically adjusts to the minimum and maximum values of any numeric field … and that field will be specified in a variable!

Categories
tutorial

georeferencing images in QGIS using OSM data: how to distort the City

You have probably already georeferenced maps with written coordinates, but using photos or images is a little different, as you need some characteristic points so you can control the position of the image according to the real world. In this little tutorial I’ll show you how to do a so-called image-to-image referencing using OpenStreetMap data and QGIS 2.0.1 Dufour.