Right Outer Join

12 April 2018

Find functions in a PostgreSQL schema

Filed under: Master Data Management — mdahlman @ 11:36

How to list all the functions in a PostgreSQL schema

Premise

I have a schema with some functions in it. Here’s an example of how I defined some functions using extensions:

CREATE SCHEMA extensions;
GRANT USAGE ON SCHEMA extensions TO PUBLIC;
ALTER DEFAULT PRIVILEGES IN SCHEMA extensions GRANT EXECUTE ON FUNCTIONS TO PUBLIC;
ALTER DATABASE semarchy4 SET SEARCH_PATH TO "$user",public,extensions;
 
CREATE EXTENSION IF NOT EXISTS "uuid-ossp" with schema extensions;
CREATE EXTENSION IF NOT EXISTS "fuzzystrmatch" with schema extensions;
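
Before going further, a quick sanity check (a minimal sketch, not strictly necessary) confirms where the extensions landed and that the search_path change took effect:

-- Which extensions exist, and which schema holds their objects?
SELECT extname, extnamespace::regnamespace AS extension_schema
FROM pg_extension;

-- The database-level search_path applies to new sessions; reconnect and check:
SHOW search_path;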

Question

Exactly what functions do I have now?

Answer

Command line

The psql command line utility makes it easy to find this info. For example:

$ psql -U semarchy_b2b_tutorial semarchy4
psql (10.1)
Type "help" for help.

semarchy4=> \df extensions.*
                                                List of functions
   Schema   |          Name          | Result data type |              Argument data types               |  Type  
------------+------------------------+------------------+------------------------------------------------+--------
 extensions | difference             | integer          | text, text                                     | normal
 extensions | dmetaphone             | text             | text                                           | normal
 extensions | dmetaphone_alt         | text             | text                                           | normal
 extensions | levenshtein            | integer          | text, text                                     | normal
 extensions | levenshtein            | integer          | text, text, integer, integer, integer          | normal
 extensions | levenshtein_less_equal | integer          | text, text, integer                            | normal
 extensions | levenshtein_less_equal | integer          | text, text, integer, integer, integer, integer | normal
 extensions | metaphone              | text             | text, integer                                  | normal
 extensions | soundex                | text             | text                                           | normal
 extensions | text_soundex           | text             | text                                           | normal
 extensions | uuid_generate_v1       | uuid             |                                                | normal
 extensions | uuid_generate_v1mc     | uuid             |                                                | normal
 extensions | uuid_generate_v3       | uuid             | namespace uuid, name text                      | normal
 extensions | uuid_generate_v4       | uuid             |                                                | normal
 extensions | uuid_generate_v5       | uuid             | namespace uuid, name text                      | normal
 extensions | uuid_nil               | uuid             |                                                | normal
 extensions | uuid_ns_dns            | uuid             |                                                | normal
 extensions | uuid_ns_oid            | uuid             |                                                | normal
 extensions | uuid_ns_url            | uuid             |                                                | normal
 extensions | uuid_ns_x500           | uuid             |                                                | normal
(20 rows)

SQL query

But in many cases I want this information via a SQL query rather than in the psql tool. Stackoverflow provided very useful information, as usual: “How can I get a list of all functions stored in the database of a particular schema in PostgreSQL?”

There’s no need to re-invent something already done. How did psql gather this info? Just ask it using the flag “-E”.

$ psql -E -U semarchy_b2b_tutorial semarchy4
psql (10.1)
Type "help" for help.

semarchy4=> \df extensions.*
********* QUERY **********
SELECT n.nspname as "Schema",
  p.proname as "Name",
  pg_catalog.pg_get_function_result(p.oid) as "Result data type",
  pg_catalog.pg_get_function_arguments(p.oid) as "Argument data types",
 CASE
  WHEN p.proisagg THEN 'agg'
  WHEN p.proiswindow THEN 'window'
  WHEN p.prorettype = 'pg_catalog.trigger'::pg_catalog.regtype THEN 'trigger'
  ELSE 'normal'
 END as "Type"
FROM pg_catalog.pg_proc p
     LEFT JOIN pg_catalog.pg_namespace n ON n.oid = p.pronamespace
WHERE n.nspname ~ '^(extensions)$'
ORDER BY 1, 2, 4;
**************************

                                                List of functions
   Schema   |          Name          | Result data type |              Argument data types               |  Type  
------------+------------------------+------------------+------------------------------------------------+--------
 extensions | difference             | integer          | text, text                                     | normal
...

That query is quite useful, but I wanted the complete signature of each function. The query returns all of the information I wanted; it just isn’t assembled into a signature. Here’s a simple fix:

SELECT 
  pg_catalog.pg_get_function_result(p.oid) || ' ' || p.proname || '( ' || pg_catalog.pg_get_function_arguments(p.oid) || ' )' as signature,
  n.nspname as "Schema",
  p.proname as "Name",
  pg_catalog.pg_get_function_result(p.oid) as "Result data type",
  pg_catalog.pg_get_function_arguments(p.oid) as "Argument data types",
 CASE
  WHEN p.proisagg THEN 'agg'
  WHEN p.proiswindow THEN 'window'
  WHEN p.prorettype = 'pg_catalog.trigger'::pg_catalog.regtype THEN 'trigger'
  ELSE 'normal'
 END as "Type"
FROM pg_catalog.pg_proc p
     LEFT JOIN pg_catalog.pg_namespace n ON n.oid = p.pronamespace
WHERE n.nspname ~ '^(extensions)$'
ORDER BY 2, 3, 5;

That works fine for what I wanted. But the next answer for that same Stackoverflow question seems even more elegant. I modified it because I wanted the complete signature. (OK, ‘signature’ should not really have the return type. But I needed that info. And the alias ‘signature’ is good enough for me.)

SELECT format('%s %I.%I( %s )', pg_get_function_result(p.oid), ns.nspname, p.proname, pg_get_function_arguments(p.oid)) as signature
FROM            pg_proc p 
INNER JOIN pg_namespace ns ON (p.pronamespace = ns.oid)
WHERE ns.nspname = 'extensions'
ORDER BY p.proname, pg_get_function_arguments(p.oid);

/* Results below */
integer extensions.difference( text, text )
text extensions.dmetaphone( text )
text extensions.dmetaphone_alt( text )
integer extensions.levenshtein( text, text )
integer extensions.levenshtein( text, text, integer, integer, integer )
integer extensions.levenshtein_less_equal( text, text, integer )
integer extensions.levenshtein_less_equal( text, text, integer, integer, integer, integer )
text extensions.metaphone( text, integer )
text extensions.soundex( text )
text extensions.text_soundex( text )
uuid extensions.uuid_generate_v1(  )
uuid extensions.uuid_generate_v1mc(  )
uuid extensions.uuid_generate_v3( namespace uuid, name text )
uuid extensions.uuid_generate_v4(  )
uuid extensions.uuid_generate_v5( namespace uuid, name text )
uuid extensions.uuid_nil(  )
uuid extensions.uuid_ns_dns(  )
uuid extensions.uuid_ns_oid(  )
uuid extensions.uuid_ns_url(  )
uuid extensions.uuid_ns_x500(  )
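
A shorter variant along the same lines (just a sketch, not from that Stackoverflow thread): casting each function’s oid to regprocedure prints the name and argument types in one go.

-- Note: the schema prefix appears only when the schema is not on the current search_path.
SELECT pg_get_function_result(p.oid) || ' ' || p.oid::regprocedure::text AS signature
FROM pg_proc p
JOIN pg_namespace ns ON ns.oid = p.pronamespace
WHERE ns.nspname = 'extensions'
ORDER BY p.proname, pg_get_function_arguments(p.oid);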

 


12 December 2014

Free Oracle on 64-bit Windows

Filed under: JasperReports — mdahlman @ 22:53

Problem

I want to run Oracle for free on my 64-bit Windows machine.

Background

This is a relatively common problem. Oracle is the most popular database in the world. (By certain revenue measures, that is. Clearly it’s not the most popular database by pure installation metrics.) Windows is the most popular OS. Nobody buys new machines with 32-bit Windows.

So the combination of the latest version of Oracle with the latest version of Windows seems tremendously useful.

Well tough luck.

Oracle Express Edition exists expressly for folks that want to get some experience with Oracle without paying. Perfect! But while Oracle 12c Enterprise Edition was released in June 2013, there has been no corresponding release of Express Edition as of January 2014. Express is stuck back at 11g. Maybe 11g is good enough to get started. Great! But it doesn’t support 64-bit Windows. No, seriously. (I have to add that ‘seriously’ comment because… seriously? Cutting out enterprise features makes perfect sense to me. But preventing it from running on current OSes just seems ridiculous.) Lots of folks want Oracle XE on 64-bit Windows. Well tough luck.

I posted an answer about this to stackoverflow.com one year ago. It seemed time to expand that answer into a more detailed article.

Solutions

Use a VM

  • VirtualBox software
  • VirtualBox VMs with Oracle pre-installed
    “Database App Development VM” is a good choice. Everything is pre-configured, and you can be up and running with Oracle extremely quickly. Oracle is running on Oracle Linux… but it’s running on Oracle Linux on VirtualBox on 64-bit Windows. Bonus benefit: your more fortunate friends and colleagues running Mac OS X are free to run Oracle on Oracle Linux on VirtualBox on their Macs, so everyone can use the same thing.

Install on 64-bit anyway

I’ll at least say this for Oracle: they don’t prevent you from installing on Windows x64. With sufficient elbow grease you can make Oracle XE work on 64-bit Windows.

Develop Only

If your goal is just to test something out or to get familiar with Oracle, then Oracle Enterprise Edition is your solution. It’s free “only for the purpose of developing, testing, prototyping and demonstrating your application”. In lots of situations this is all that you need. And it’s available with a 64-bit Windows installer.

I’m surprised that Oracle doesn’t make these facts easier to track down.

18 July 2014

Hierarchical JSON from Oracle

Filed under: Master Data Management, Oracle — mdahlman @ 14:03

Background

Semarchy manages master data hierarchies (corporate structures, product group hierarchies, employee management trees, etc.) easily with out-of-the-box functionality. By this I mean it can validate the data, match up different sources, enrich the data from external systems, manage an audit log of changes, and so forth. It’s all great stuff. But on a recent project I wanted to display hierarchical data using an intuitive visual interface. A plethora of visualization libraries exist, and I was leaning toward using D3 since it appears to be one of the most polished, most used, and most actively developed at the moment.

Problem

The D3 example I wanted to use is designed to accept data in JSON format. My data is in Oracle, and Oracle doesn’t provide a simple way to generate a complex JSON output.

Likely Solutions

A few people pointed me to plsql-utils, aka Alexandria, as the best starting point. It’s a really useful Oracle resource, and I spent some time investigating this idea. Morten Braten’s 2010 article about it is excellent. But in the end I didn’t find it to be the right tool for this problem. It makes it very easy to take a result set and convert it to valid JSON where each row becomes a JSON record. But that’s just tabular data as JSON, not the nested structure I needed to feed into the D3 engine. I have no doubt that I could write a stored procedure to loop through my data, build a more appropriate hierarchical structure, and then use plsql-utils to convert that to JSON. But the level of effort required seemed high.

I found several references to PL/JSON. This project shows potential, but it doesn’t appear to be actively developed (as of mid 2014).

There’s an interesting answer at the greatest of all answer sites from Mike Bostock, the author of D3(!). That example is focused on converting comma separated values (CSV) data to JSON. The concepts could be applied here. But my data, though tabular, is not actually CSV. I would prefer to use the D3 sample with the smallest number of changes possible. So I would much prefer to return the data to D3 already JSON-ified if I can.

Then I found Lucas Jellema’s 2011 article about generating a JSON string directly from a query. This presented a more intuitive approach for me. He uses a common table expression (CTE) to create a sub-select that gathers the hierarchical information, along with the LISTAGG analytic function to present it well. Clever. In the end I didn’t actually use LISTAGG and I didn’t really use a CTE. (OK, my sample query below has a CTE… but it could be changed into a standard subquery with trivial effort.)

My Solution

In the end I decided to use Oracle’s inherent abilities to handle hierarchical information (mainly the CONNECT BY syntax) and then convert it to JSON with the addition of simple string logic. The key concepts needed in this conversion are:

  • The CONNECT BY query can return the data in a specified logical order.
  • By knowing if the next record is at a higher, lower, or equal level in the hierarchy, we can generate JSON brackets correctly.
  • We can know if the next record is at a higher, lower, or equal level in the hierarchy by using analytic windowing functions like LAG and LEAD.

Here’s the commented SQL used to return the data:

WITH connect_by_query as (
  SELECT 
     ROWNUM                               as rnum
    ,FIRST_NAME || ' ' || LAST_NAME       as FULL_NAME
    ,LEVEL                                as Lvl
  FROM GD_EMPLOYEE emp1
  START WITH EMPLOYEE_NUMBER = 100
  CONNECT BY PRIOR EMPLOYEE_NUMBER = F_MANAGER
  ORDER SIBLINGS BY EMPLOYEE_NUMBER
)
select 
  CASE 
    /* the top dog gets a left curly brace to start things off */
    WHEN Lvl = 1 THEN '{'
    /* when this row is one level deeper than the previous row, start a "children" array */
    WHEN Lvl - LAG(Lvl) OVER (order by rnum) = 1 THEN ',"children" : [{' 
    ELSE ',{' 
  END 
  || ' "name" : "' || FULL_NAME || '" '
  /* when the next row is at the same or a shallower level, close this object (and any finished "children" arrays) */
  || CASE WHEN LEAD(Lvl, 1, 1) OVER (order by rnum) - Lvl <= 0 
     THEN '}' || rpad( ' ', 1+ (-2 * (LEAD(Lvl, 1, 1) OVER (order by rnum) - Lvl)), ']}' )
     ELSE NULL 
  END as JSON_SNIPPET
from connect_by_query
order by rnum;

Here’s an example of the data returned (the indentation was added afterwards, but the content is exactly what the query returned):

{
  "name": "Steven King",
  "children": [{
    "name": "Neena Kochhar",
    "children": [{
      "name": "Nancy Greenberg",
      "children": [{
        "name": "Daniel Faviet"
      }, {
        "name": "John Chen"
      }, {
        "name": "Ismael Sciarra"
      }, {
        "name": "Jose Manuel Urman"
      }, {
        "name": "Luis Popp"
      }]
    }]
  }, {
    "name": "Lex De Haan",
    "children": [{
      "name": "Alexander Hunold",
      "children": [{
        "name": "Bruce Ernst"
      }, {
        "name": "David Austin"
      }, {
        "name": "Valli Pataballa"
      }, {
        "name": "Diana Lorentz"
      }]
    }]
  }, {
    "name": "Den Raphaely",
    "children": [{
      "name": "Alexander Khoo"
    }, {
      "name": "Shelli Baida"
    }, {
      "name": "Sigal Tobias"
    }, {
      "name": "Guy Himuro"
    }, {
      "name": "Karen Colmenares"
    }]
  }]
}
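
As an aside: if you want the whole document back as a single value instead of one JSON_SNIPPET row per employee, the LISTAGG function mentioned earlier can stitch the rows together. This is only a sketch (I didn’t use it), and LISTAGG returns VARCHAR2, so it only works while the assembled JSON stays under the string length limit.

WITH connect_by_query as (
  SELECT 
     ROWNUM                               as rnum
    ,FIRST_NAME || ' ' || LAST_NAME       as FULL_NAME
    ,LEVEL                                as Lvl
  FROM GD_EMPLOYEE emp1
  START WITH EMPLOYEE_NUMBER = 100
  CONNECT BY PRIOR EMPLOYEE_NUMBER = F_MANAGER
  ORDER SIBLINGS BY EMPLOYEE_NUMBER
),
json_snippets as (
  /* same bracket logic as the query above */
  select 
    rnum,
    CASE 
      WHEN Lvl = 1 THEN '{'
      WHEN Lvl - LAG(Lvl) OVER (order by rnum) = 1 THEN ',"children" : [{' 
      ELSE ',{' 
    END 
    || ' "name" : "' || FULL_NAME || '" '
    || CASE WHEN LEAD(Lvl, 1, 1) OVER (order by rnum) - Lvl <= 0 
       THEN '}' || rpad( ' ', 1+ (-2 * (LEAD(Lvl, 1, 1) OVER (order by rnum) - Lvl)), ']}' )
       ELSE NULL 
    END as JSON_SNIPPET
  from connect_by_query
)
select LISTAGG(JSON_SNIPPET) WITHIN GROUP (ORDER BY rnum) as JSON_DOC
from json_snippets;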

With the data in that form, it was easy to implement this D3 sample inside Semarchy Convergence for MDM:

Employee Hierarchy in Semarchy MDM

The left side shows the standard tree view. Practical.
The right side shows the D3 tree visualization. Awesome.
(And practical in different ways.)

 

Here is the SQL (creates, inserts, and the complete select statement) to try it yourself:

Oracle select query to generate JSON data
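
In case that file isn’t handy, here’s a minimal sketch of the table and a few rows the query expects. The column names (EMPLOYEE_NUMBER, F_MANAGER, FIRST_NAME, LAST_NAME) come straight from the query above; the sample values below are illustrative, not the full data set behind the screenshot.

CREATE TABLE GD_EMPLOYEE (
  EMPLOYEE_NUMBER NUMBER PRIMARY KEY,
  F_MANAGER       NUMBER,        -- manager's EMPLOYEE_NUMBER; NULL for the top dog
  FIRST_NAME      VARCHAR2(50),
  LAST_NAME       VARCHAR2(50)
);

INSERT INTO GD_EMPLOYEE VALUES (100, NULL, 'Steven',    'King');
INSERT INTO GD_EMPLOYEE VALUES (101, 100,  'Neena',     'Kochhar');
INSERT INTO GD_EMPLOYEE VALUES (102, 100,  'Lex',       'De Haan');
INSERT INTO GD_EMPLOYEE VALUES (108, 101,  'Nancy',     'Greenberg');
INSERT INTO GD_EMPLOYEE VALUES (103, 102,  'Alexander', 'Hunold');
COMMIT;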

 

8 July 2014

MDM in the Cloud (on Amazon AWS Marketplace)

Semarchy MDM on AWS Marketplace


Semarchy shows off its 5 star reviews as the most popular MDM solution on Amazon’s AWS Marketplace

MDM in the Cloud

One of the biggest impediments to Master Data Management (MDM) projects is that they can be hard to get started. An enterprise has lots of people and lots of groups who all stand to benefit from improved data quality, structured data governance, and systematic master data management. But the very fact that so many people stand to gain from it is also a reason why it’s slow to start. Gathering requirements and opinions from everyone takes time.

One of the best ways to get quick agreement on the scope of the first iteration of an MDM project is to generate a quick proof-of-concept or proof-of-value prototype. And one of the quickest ways to get started on an MDM prototype is by using software that’s completely pre-installed and pre-configured. This leads to better alignment about what will be possible in the MDM project, which makes the project more likely to succeed.

The cloud is a natural fit for this.

Amazon’s AWS Marketplace provides an environment where it’s easy to find software that’s relevant to your needs and get it launched instantly without any up-front costs. When I worked at Jaspersoft I invested quite a bit of time into getting a pre-configured JasperReports Server instance available and in making it easy for people to use Business Intelligence (BI) in the cloud. It was a natural fit especially for anyone who already had data in Amazon RDS or Redshift. The time we invested in that paid off nicely as customers flocked to it. Sales are way up; the reviews are great; and it should serve as a model and an inspiration to other vendors considering cloud offerings.

Semarchy in the Cloud

While business intelligence offerings in the cloud are legion, traditional Master Data Management vendors have been much too slow to embrace the cloud. The industry has taken baby steps. For example, Informatica purchased Data Scout and sells this as their SaaS MDM Salesforce.com plug-in solution. It’s a great utility for salesforce.com, but I don’t put it into the same class as enterprise MDM. Other SaaS MDM solutions are similar.

At Semarchy I see the cloud as an excellent vehicle for putting enterprise MDM into the hands of more users. You can have a fully functional MDM server running in an Amazon Virtual Private Cloud (VPC) in less than an hour. It’s accessible only to people from your company, and it’s ready for you to model your master data management requirements and to start fuzzy-matching and de-duplicating your data.

I expect other vendors to follow eventually. The net result will be improved solutions available to data management professionals everywhere. I’m pleased that Semarchy is leading the way.

5 December 2013

Copy files between s3 buckets

Filed under: AWS, Linux — mdahlman @ 15:06

The problem

I needed to copy files between Amazon AWS S3 buckets. This should be easy. Right?

To be clear, I wanted the equivalent of this:

cp s3://sourceBucket/file_prefix* s3://targetBucket/

The solution (short version)

No, it’s not easy.

Or rather, in the end it turned out to be pretty easy; but it was entirely unintuitive.

s3cmd cp --recursive --exclude=* --include=file_prefix* s3://sourceBucket/ s3://targetBucket/

The explanation (long version)

Get s3cmd

The best command line utility for working with S3 is s3cmd. You can get it from s3tools.org. If you’re on Amazon Linux (or CentOS, RHEL, etc.), then this is the easiest way to install it:

# Check that s3tools.repo is not already in your list of repositories:
ls /etc/yum.repos.d/
# Put s3tools.repo in your list of repositories like this:
sudo wget http://s3tools.org/repo/RHEL_6/s3tools.repo -O /etc/yum.repos.d/s3tools.repo
# Confirm that you did it correctly:
ls /etc/yum.repos.d/

# Install s3cmd:
sudo yum install s3cmd

# Configure s3cmd:
s3cmd --configure

False start 1

s3cmd has a copy command, “cp”. Try that:

# This should do the trick:
s3cmd cp s3://sourceBucket/file_prefix* s3://targetBucket/

One file copies successfully… but then it crashes:

File s3://sourceBucket/file_prefix_name1.txt copied to s3://targetBucket/file_prefix_name1.txt

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
    An unexpected error has occurred.
  Please report the following lines to:
   s3tools-bugs@lists.sourceforge.net
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

Problem: KeyError: 'dest_name'
S3cmd:   1.0.0

Traceback (most recent call last):
  File "/usr/bin/s3cmd", line 2006, in <module>
    main()
  File "/usr/bin/s3cmd", line 1950, in main
    cmd_func(args)
  File "/usr/bin/s3cmd", line 614, in cmd_cp
    subcmd_cp_mv(args, s3.object_copy, "copy", "File %(src)s copied to %(dst)s")
  File "/usr/bin/s3cmd", line 604, in subcmd_cp_mv
    dst_uri = S3Uri(item['dest_name'])
KeyError: 'dest_name'

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
    An unexpected error has occurred.
    Please report the above lines to:
   s3tools-bugs@lists.sourceforge.net
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

Argh!! This stackoverflow answer confirms that s3cmd cp cannot handle this. (It is wrong, but for a long time I believed it.)

False start 2

This stackoverflow answer suggests “sync” as the command to use.

It is correct. But sync is not the same as copy, so it has bad side effects if what you really want is a copy. For example, sync can remove files from the target folder (to keep things in sync, duh). So syncing from source1 and source2 into a single target will cause grief. For copying all files from one location to another, sync is great. But I wanted to copy files, and I did not want any of the side effects of sync.

Bad alternatives

You can write your own script using boto and Python, or muck around with awk to build lists of files and copy them one by one. In principle these will work, but yuck.

You could download the files from s3 then put them back up into the intended target bucket. This is a terrible solution. It will succeed… but what a waste of time and bandwidth. What makes it so tempting is that s3cmd works exactly like you want it to work with “get” and “put”.

s3cmd put /localDirectory/file_prefix* s3://targetBucket/

If “put” is so easy, why is “cp” so hard?

Enlightenment

I studied the s3cmd options over and over. Eventually I realized “cp” had more flexibility if you look deep enough.

  • --recursive
    In my mind, my requirement is clearly not recursive. I simply want multiple files. But recursive in this context just tells s3cmd cp to handle multiple files. Great.
  • --exclude
    It’s an odd way to think of the problem. Begin by recursively selecting all files. Next, exclude all files. Wait, what?
  • --include
    Now we’re talking. Indicate the file prefix (or suffix or whatever pattern) that you want to include.
  • s3://sourceBucket/  s3://targetBucket/
    This part is intuitive enough. Though technically it seems to violate the documented example from s3cmd help which indicates that a source object must be specified:
    s3cmd cp s3://BUCKET1/OBJECT1 s3://BUCKET2[/OBJECT2]

I posted a brief version of my answer to the most elegant of technical websites. You should vote it up. But that didn’t seem like the best place to elaborate on the answer as I’ve done here.

Postscript

Amazon offers a command line interface (CLI) tool that does the same thing: the AWS Command Line Interface. I swear that I looked extensively and repeatedly for exactly this, saying, “I just can’t believe that Amazon wouldn’t have this by now.” Well, they do. I have no idea why I could not find it, but I’m mentioning it here for my own future reference and for anyone else who is using s3cmd as an alternative to the Amazon utility they couldn’t find.

I have no idea if the Amazon CLI is [ better | worse | different ] than s3cmd in any interesting way regarding S3. (It’s certainly different in the respect that it interacts with many other AWS services besides S3.) If I ever need to compare them, then I’ll write it up.

 
