Troubleshooting Guide¶
Comprehensive solutions for common pycubrid issues — connection errors, query problems, type mismatches, LOB handling, performance tuning, and Docker setup.
Table of Contents¶
- Connection Issues
  - ConnectionRefusedError on Port 33000
  - Authentication Failed
  - TimeoutError or socket.timeout
  - Connection Closed Unexpectedly
  - Broker Port Redirect Failure
- Query Issues
  - ProgrammingError: SQL Syntax
  - Parameter Binding Errors
  - Wrong Number of Parameters
  - Reserved Word Conflicts
  - Empty Result Set
- Transaction Issues
  - Data Not Persisted After Insert
  - Autocommit Behavior
  - Deadlocks
- Type Mapping Issues
  - Date/Time Handling
  - Decimal Precision Loss
  - NULL Handling
  - Boolean Values
  - Unicode / NCHAR Encoding
- LOB (CLOB/BLOB) Issues
  - LOB Columns Return a Dict, Not Data
  - Cannot Pass Lob Object as Parameter
  - LOB Size Limits
- Cursor Issues
  - InterfaceError: Cursor is Closed
  - fetchone() Returns None Unexpectedly
  - rowcount Is -1 After SELECT
  - executemany() Performance
- Prepared Statement Issues
  - execute(sql, params) Pattern
  - Mixing Parameterized and Direct Execution
- Docker Issues
  - Container Starts but Cannot Connect
  - Database Not Found
  - Container Health Check
- SQLAlchemy Integration Issues
  - Wrong Connection URL Format
  - Autocommit Conflicts
  - Connection Pool Exhaustion
- Performance Issues
  - Slow Queries
  - High Memory Usage
  - Connection Overhead
- Debugging Techniques
Troubleshooting Decision Tree¶
```mermaid
flowchart TD
    A[Start: pycubrid error observed] --> B{Connection established?}
    B -->|No| C[Check broker status and port reachability]
    C --> D{Authentication error?}
    D -->|Yes| E[Verify user/password and database]
    D -->|No| F[Set connect_timeout and inspect network/firewall]
    B -->|Yes| G{Query or transaction failure?}
    G -->|Query| H[Validate SQL syntax and placeholder count]
    G -->|Transaction| I[Confirm autocommit and explicit commit/rollback]
    G -->|LOB| J[Validate LOB type and read/write flow]
    H --> K[Catch specific pycubrid exceptions]
    I --> K
    J --> K
    E --> K
    F --> K
    K --> L[Use debug script and logs, then escalate with reproducible case]
```
Tip
Start from the first failing operation (connect, execute, fetch, or commit) and isolate one variable at a time.
Note
Most production failures can be classified quickly into one of four buckets: connectivity, authentication, SQL/binding, or transaction state.
Connection Issues¶
ConnectionRefusedError on Port 33000¶
Symptom:
Causes and fixes:
Warning
CUBRID startup is asynchronous in Docker-based setups. A successful container start does not always mean the broker is already accepting connections.
- CUBRID broker is not running
- Wrong port — The broker may be configured on a different port.
- Docker container not ready — The CUBRID container takes a few seconds to initialize.
```bash
# Check container status
docker compose ps

# Wait for health check
docker compose up -d
sleep 5  # Wait for broker initialization

# Verify with logs
docker compose logs cubrid | tail -20
```
- Firewall or network — Port 33000 may be blocked.
Authentication Failed¶
Symptom:
Fixes:
CUBRID's default dba user has no password. If you set a password, make sure it matches:
```python
# Default — no password
conn = pycubrid.connect(
    host="localhost",
    port=33000,
    database="testdb",
    user="dba",
)

# With password
conn = pycubrid.connect(
    host="localhost",
    port=33000,
    database="testdb",
    user="dba",
    password="your_password",
)
```
Common mistakes:
- Passing password="" when the user has a password set
- Passing a password when the user has no password (some CUBRID versions reject this)
- Wrong username — CUBRID usernames are case-insensitive but must exist
TimeoutError or socket.timeout¶
Symptom:
Fixes:
- Increase timeout for slow networks:
```python
conn = pycubrid.connect(
    host="remote-server.example.com",
    port=33000,
    database="testdb",
    user="dba",
    connect_timeout=30.0,  # 30-second timeout
)
```
- Verify server is reachable:
- Check for network firewalls between client and server.
Connection Closed Unexpectedly¶
Symptom:
Causes:
- Server-side session timeout — the CUBRID broker has a `SESSION_TIMEOUT` setting; the default is 300 seconds (5 minutes) of inactivity.
- Broker restart — if the broker restarts, all existing connections are terminated.
- Network interruption — Temporary network failure drops the TCP connection.
- Idle connection cleanup — The broker may close idle connections to free resources.
Fix: Create a new connection when this error occurs:
```python
import pycubrid

def get_connection():
    return pycubrid.connect(
        host="localhost",
        port=33000,
        database="testdb",
        user="dba",
    )

conn = get_connection()
try:
    cur = conn.cursor()
    cur.execute("SELECT 1")
except pycubrid.OperationalError:
    # Reconnect on connection loss
    conn = get_connection()
    cur = conn.cursor()
    cur.execute("SELECT 1")
```
For long-running applications, use SQLAlchemy with connection pooling — it handles reconnection automatically:
```python
from sqlalchemy import create_engine

engine = create_engine(
    "cubrid+pycubrid://dba@localhost:33000/testdb",
    pool_pre_ping=True,  # Test connection before use
    pool_recycle=1800,   # Recycle connections every 30 minutes
)
```
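If neither SQLAlchemy nor another framework is in play, a small retry wrapper around the connect call can smooth over transient drops. A minimal sketch — `connect_with_retry`, its arguments, and the default retried exception type are illustrative, not part of the pycubrid API:

```python
import time

def connect_with_retry(connect_fn, attempts=3, delay=1.0, retry_on=(OSError,)):
    """Call connect_fn() until it succeeds, retrying transient errors.

    connect_fn — zero-argument callable returning a live connection,
                 e.g. lambda: pycubrid.connect(host="localhost", ...)
    retry_on   — exception types treated as transient; in real code,
                 include pycubrid.OperationalError here.
    """
    last_exc = None
    for attempt in range(1, attempts + 1):
        try:
            return connect_fn()
        except retry_on as exc:
            last_exc = exc
            if attempt < attempts:
                time.sleep(delay * attempt)  # linear backoff between tries
    raise last_exc
```

Keep the attempt count low — if three connects in a row fail, the problem is almost always the broker, not the network blip this wrapper is meant to absorb.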
Broker Port Redirect Failure¶
Symptom:
Background: When pycubrid connects to port 33000, the CUBRID broker may redirect the connection to a different CAS (CUBRID Application Server) port. If the redirect port is unreachable, the connection fails.
Fix:
- Check CAS processes are running:
- Ensure all CAS ports are reachable — If using Docker with port forwarding, only port 33000 may be exposed. When the broker redirects to a different port, the connection fails if that port is not forwarded.
Docker fix — Expose a range of ports, or configure the broker to reuse the connection (port 0 mode):
```yaml
# docker-compose.yml
services:
  cubrid:
    image: cubrid/cubrid:11.2
    ports:
      - "33000:33000"
    environment:
      CUBRID_DB: testdb
```
The default Docker image is configured correctly for single-port access. If you see this error with Docker, check that you're not overriding broker configuration.
Query Issues¶
ProgrammingError: SQL Syntax¶
Symptom:
Common causes:
- Using MySQL/PostgreSQL-specific syntax — CUBRID has its own SQL dialect:
```python
# WRONG — CUBRID doesn't support LIMIT with comma syntax
cur.execute("SELECT * FROM users LIMIT 0, 10")

# CORRECT — use LIMIT with OFFSET
cur.execute("SELECT * FROM users LIMIT 10 OFFSET 0")
```
- Using reserved words as identifiers — Quote them with double quotes:
```python
# WRONG — 'value' is a reserved word
cur.execute("SELECT value FROM config")

# CORRECT — quote the identifier
cur.execute('SELECT "value" FROM config')

# BETTER — avoid reserved words
cur.execute("SELECT val FROM config")
```
- Missing semicolons are fine — pycubrid does not require trailing semicolons (and they may cause errors in some contexts).
Parameter Binding Errors¶
Symptom:
Danger
pycubrid supports qmark placeholders (?) only. Mixing %s, :name, or f-string SQL construction often causes subtle runtime errors or SQL injection risk.
pycubrid uses qmark paramstyle (question marks). Do not use named parameters or format strings:
```python
# CORRECT — qmark style
cur.execute("SELECT * FROM users WHERE name = ? AND age > ?", ("Alice", 25))

# WRONG — named parameters (not supported)
cur.execute("SELECT * FROM users WHERE name = :name", {"name": "Alice"})

# WRONG — format string (SQL injection risk!)
cur.execute(f"SELECT * FROM users WHERE name = '{name}'")

# WRONG — %s style (not supported)
cur.execute("SELECT * FROM users WHERE name = %s", ("Alice",))
```
Supported Python types for parameters:
| Python Type | SQL Result |
|---|---|
| `None` | `NULL` |
| `bool` | `1` or `0` |
| `int`, `float` | Numeric literal |
| `Decimal` | Numeric literal |
| `str` | `'escaped_string'` |
| `bytes` | `X'hex_string'` |
| `datetime.date` | `DATE'YYYY-MM-DD'` |
| `datetime.time` | `TIME'HH:MM:SS'` |
| `datetime.datetime` | `DATETIME'YYYY-MM-DD HH:MM:SS.mmm'` |
Wrong Number of Parameters¶
Symptom:
Fix: Ensure the number of ? placeholders matches the number of parameters:
```python
# WRONG — 2 placeholders, 1 parameter
cur.execute("INSERT INTO users (name, age) VALUES (?, ?)", ("Alice",))

# CORRECT — 2 placeholders, 2 parameters
cur.execute("INSERT INTO users (name, age) VALUES (?, ?)", ("Alice", 30))
```
For single parameters, pass a tuple (not a bare value):
```python
# WRONG — string is iterable, each character becomes a parameter
cur.execute("SELECT * FROM users WHERE name = ?", "Alice")

# CORRECT — wrap in a tuple
cur.execute("SELECT * FROM users WHERE name = ?", ("Alice",))
```
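A pre-flight check can catch mismatches before the statement reaches the server. This hypothetical helper naively counts `?` outside single-quoted literals — it does not cover every CUBRID quoting rule (e.g. doubled `''` escapes or comments), so treat it as a debugging aid, not a parser:

```python
def count_placeholders(sql: str) -> int:
    """Count qmark placeholders, skipping '?' inside single-quoted literals."""
    count = 0
    in_string = False
    for ch in sql:
        if ch == "'":
            in_string = not in_string  # naive: ignores doubled '' escapes
        elif ch == "?" and not in_string:
            count += 1
    return count

def check_params(sql, params):
    """Raise early if the placeholder count doesn't match the parameters."""
    expected = count_placeholders(sql)
    if expected != len(params):
        raise ValueError(f"SQL has {expected} placeholders, got {len(params)} parameters")
```

Calling `check_params(sql, params)` just before `cur.execute(sql, params)` turns a cryptic server-side binding error into a clear client-side `ValueError`.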
Reserved Word Conflicts¶
Common CUBRID reserved words that often clash with column/table names:
| Reserved Word | Safe Alternative |
|---|---|
| `value` | `val`, `item_value` |
| `count` | `cnt`, `item_count` |
| `data` | `file_data`, `raw_data` |
| `level` | `user_level`, `access_level` |
| `name` | Usually OK, but check if issues occur |
| `status` | `item_status` |
| `type` | `item_type` |
| `action` | `user_action` |
To use reserved words as identifiers, quote them with double quotes:
```python
cur.execute('CREATE TABLE "order" (id INT, "value" VARCHAR(100))')
cur.execute('SELECT "value" FROM "order"')
```
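When SQL is generated and quoting has to happen in many places, a tiny helper keeps it consistent. `quote_ident` is a hypothetical convenience, not part of pycubrid — and it is for identifiers only; values must still go through `?` placeholders, never string formatting:

```python
def quote_ident(name: str) -> str:
    """Wrap an identifier in double quotes, doubling embedded quotes."""
    return '"' + name.replace('"', '""') + '"'

# Identifiers are quoted; values still go through ? placeholders
sql = f'SELECT {quote_ident("value")} FROM {quote_ident("order")} WHERE id = ?'
print(sql)  # SELECT "value" FROM "order" WHERE id = ?
```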
Empty Result Set¶
Symptom: fetchone() returns None or fetchall() returns [] when you expect data.
Common causes:
- Uncommitted INSERT — data was inserted but not committed:

```python
cur.execute("INSERT INTO users (name) VALUES (?)", ("Alice",))
conn.commit()  # Don't forget this!
cur.execute("SELECT * FROM users WHERE name = ?", ("Alice",))
print(cur.fetchall())
```

- Different connection — each connection has its own transaction view. Uncommitted data in one connection is not visible in another.
- Case sensitivity — CUBRID string comparison is case-sensitive by default:

```python
# These return different results
cur.execute("SELECT * FROM users WHERE name = ?", ("alice",))
cur.execute("SELECT * FROM users WHERE name = ?", ("Alice",))
```
Transaction Issues¶
Data Not Persisted After Insert¶
Symptom: Data is inserted successfully (no error), but a subsequent query from a different connection or after reconnection shows no data.
Cause: autocommit is False by default in pycubrid when using the constructor directly. You must call conn.commit() explicitly.
```python
conn = pycubrid.connect(host="localhost", port=33000, database="testdb", user="dba")
cur = conn.cursor()
cur.execute("INSERT INTO users (name) VALUES (?)", ("Alice",))
conn.commit()  # Required! Without this, data is lost on close
conn.close()
```
Or use the context manager which auto-commits on success:
```python
with pycubrid.connect(host="localhost", port=33000, database="testdb", user="dba") as conn:
    cur = conn.cursor()
    cur.execute("INSERT INTO users (name) VALUES (?)", ("Alice",))
    # Auto-commits on successful exit
```
Autocommit Behavior¶
Symptom: Unexpected commit or rollback behavior.
Key facts:
| Scenario | autocommit | Behavior |
|---|---|---|
| Default constructor | `False` (driver default) | Explicit `commit()` required |
| Via SQLAlchemy | `False` (dialect sets it) | SQLAlchemy manages transactions |
| Context manager exit | N/A | Commits on success, rolls back on exception |
To switch modes:
```python
# Check current mode
print(conn.autocommit)  # False

# Disable for manual transaction control
conn.autocommit = False
cur.execute("INSERT INTO users (name) VALUES (?)", ("Alice",))
cur.execute("INSERT INTO users (name) VALUES (?)", ("Bob",))
conn.commit()  # Both inserts committed together
```
Deadlocks¶
Symptom:
CUBRID uses row-level locking. Deadlocks occur when two connections hold locks that each other needs.
Prevention:
- Keep transactions short
- Access tables in a consistent order
- Use `SELECT ... FOR UPDATE` to lock rows upfront
- Set appropriate isolation levels
```python
# Lock rows before updating to prevent deadlocks
cur.execute("SELECT * FROM accounts WHERE id = ? FOR UPDATE", (1,))
cur.execute("UPDATE accounts SET balance = balance - 100 WHERE id = ?", (1,))
conn.commit()
```
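When a deadlock does occur, the server aborts one victim transaction, so the standard remedy is to roll back and rerun the whole transaction from the start. A generic sketch — the function, the retried exception type, and the backoff are illustrative; in real code you would catch pycubrid's operational error and check whether it reports a deadlock:

```python
import time

def run_transaction(conn, txn, retries=3, retry_on=(RuntimeError,)):
    """Run txn(conn) and commit; on a deadlock-style error, roll back
    and rerun the whole transaction from the start.

    retry_on is a placeholder — in real code pass the driver's
    operational error type (e.g. pycubrid.OperationalError).
    """
    for attempt in range(retries):
        try:
            result = txn(conn)
            conn.commit()
            return result
        except retry_on:
            conn.rollback()
            if attempt == retries - 1:
                raise
            time.sleep(0.1 * (attempt + 1))  # brief backoff before retrying
```

The key design point is that `txn` must contain the entire transaction: after a rollback, every statement has to be replayed, not just the one that failed.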
Type Mapping Issues¶
Date/Time Handling¶
CUBRID type → Python type mapping:
| CUBRID Type | Python Type | Example |
|---|---|---|
| `DATE` | `datetime.date` | `date(2025, 1, 15)` |
| `TIME` | `datetime.time` | `time(14, 30, 0)` |
| `DATETIME` | `datetime.datetime` | `datetime(2025, 1, 15, 14, 30, 0)` |
| `TIMESTAMP` | `datetime.datetime` | `datetime(2025, 1, 15, 14, 30, 0)` |
Common issue — inserting date strings:
```python
# CORRECT — use Python datetime objects
from datetime import date, datetime

cur.execute("INSERT INTO events (event_date) VALUES (?)", (date(2025, 1, 15),))
cur.execute("INSERT INTO events (event_time) VALUES (?)", (datetime(2025, 1, 15, 14, 30, 0),))

# ALSO CORRECT — CUBRID accepts date literal strings in SQL
cur.execute("INSERT INTO events (event_date) VALUES (DATE'2025-01-15')")
```
Decimal Precision Loss¶
Symptom: Decimal values lose precision when inserted or retrieved.
Fix: Use decimal.Decimal for exact numeric values:
```python
from decimal import Decimal

# CORRECT — preserves precision
cur.execute("INSERT INTO products (price) VALUES (?)", (Decimal("19.99"),))

# RISKY — float has inherent precision issues
cur.execute("INSERT INTO products (price) VALUES (?)", (19.99,))
```
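The loss happens in Python before the value ever reaches the driver, as a quick check shows:

```python
from decimal import Decimal

exact = Decimal("19.99")  # from a string — exactly 19.99
approx = Decimal(19.99)   # from a float — the stored binary approximation

print(exact)            # 19.99
print(exact == approx)  # False: the float was never exactly 19.99
```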
NULL Handling¶
Inserting NULL:
Checking for NULL in results:
```python
cur.execute("SELECT email FROM users")
row = cur.fetchone()
if row[0] is None:
    print("Email is NULL")
```
Boolean Values¶
CUBRID has no native BOOLEAN type. Use SMALLINT (0/1):
```python
# Insert boolean-like values
cur.execute("INSERT INTO settings (is_active) VALUES (?)", (True,))   # Stored as 1
cur.execute("INSERT INTO settings (is_active) VALUES (?)", (False,))  # Stored as 0

# Read boolean-like values
cur.execute("SELECT is_active FROM settings")
row = cur.fetchone()
is_active = bool(row[0])  # Convert SMALLINT back to bool
```
Unicode / NCHAR Encoding¶
CUBRID supports Unicode through NCHAR and NCHAR VARYING types. pycubrid handles UTF-8 encoding transparently:
```python
# Unicode strings work directly
cur.execute("INSERT INTO users (name) VALUES (?)", ("김영선",))
cur.execute("INSERT INTO users (name) VALUES (?)", ("日本語テスト",))

cur.execute("SELECT name FROM users")
for row in cur:
    print(row[0])  # Prints correctly: 김영선, 日本語テスト
```
LOB (CLOB/BLOB) Issues¶
LOB Columns Return a Dict, Not Data¶
Symptom: Fetching a CLOB/BLOB column returns a dictionary instead of the actual data.
```python
cur.execute("SELECT clob_col FROM my_table")
row = cur.fetchone()
print(row[0])
# {'lob_type': 24, 'lob_length': 1234, 'file_locator': '...', 'packed_lob_handle': b'...'}
```
This is expected behavior. CUBRID's CAS protocol returns LOB metadata, not the LOB content inline. To read LOB content, you need to use the LOB handle separately.
Workaround — insert and retrieve as regular strings/bytes:
```python
# Insert string directly into CLOB column
cur.execute("INSERT INTO docs (content) VALUES (?)", ("Large text content here...",))
conn.commit()

# Fetches return the metadata dict for CLOB/BLOB columns; use the packed handle to read content
```
Cannot Pass Lob Object as Parameter¶
Symptom:
Lob objects cannot be used as query parameters. Insert strings/bytes directly:
```python
# WRONG — Lob objects cannot be passed as parameters
lob = conn.create_lob(24)  # CLOB
lob.write(b"data")
cur.execute("INSERT INTO docs (content) VALUES (?)", (lob,))  # ERROR!

# CORRECT — pass string directly
cur.execute("INSERT INTO docs (content) VALUES (?)", ("Large text content",))

# CORRECT — pass bytes for BLOB
cur.execute("INSERT INTO docs (binary_data) VALUES (?)", (b"\x89PNG\r\n...",))
```
LOB Size Limits¶
CUBRID LOB size limits depend on the server configuration. The default maximum is typically sufficient for most use cases, but extremely large objects may need server-side configuration adjustments.
For files larger than a few megabytes, consider:
- Storing file paths in the database instead of file content
- Breaking large content into chunks
- Using CUBRID's file storage configuration options
Cursor Issues¶
InterfaceError: Cursor is Closed¶
Symptom:
Causes:
- Explicitly closed cursor — you called `cur.close()` then tried to use it again
- Connection closed — closing a connection closes all its cursors
- Context manager exited — `with conn.cursor() as cur:` closes the cursor on exit
Fix: Create a new cursor:
fetchone() Returns None Unexpectedly¶
Possible causes:
- No rows in result set — the query returned 0 rows
- Already consumed — a previous `fetchone()` or `fetchall()` consumed all rows
- Non-SELECT statement — `INSERT`, `UPDATE`, `DELETE` don't produce rows
```python
cur.execute("SELECT * FROM users")
row1 = cur.fetchone()  # First row or None
row2 = cur.fetchone()  # Second row or None
# ... continues until None (no more rows)
```
To re-read results, execute the query again:
```python
cur.execute("SELECT * FROM users")
all_rows = cur.fetchall()  # Get all at once
# cur.fetchone() would now return None — results already consumed
```
rowcount Is -1 After SELECT¶
This is correct PEP 249 behavior. `rowcount` is only meaningful for `INSERT`, `UPDATE`, and `DELETE` statements:
```python
cur.execute("SELECT * FROM users")
print(cur.rowcount)  # -1 (undefined for SELECT)

cur.execute("UPDATE users SET name = 'Bob' WHERE id = 1")
print(cur.rowcount)  # 1 (one row affected)

cur.execute("DELETE FROM users WHERE id > 100")
print(cur.rowcount)  # Number of deleted rows
```
executemany() Performance¶
For bulk inserts and other non-SELECT DML, `executemany()` does not execute each parameter set as a separate round trip: it renders each bound statement and sends the full batch in one `BatchExecutePacket`. Only SELECT statements fall back to the per-parameter loop, to preserve result-set semantics. Use `executemany_batch()` when you already have distinct SQL strings and want to send them in one batch request:
```python
# Standard executemany — non-SELECT DML batches into one request
data = [("Alice", 30), ("Bob", 25), ("Charlie", 35)]
cur.executemany("INSERT INTO users (name, age) VALUES (?, ?)", data)

# executemany_batch — sends multiple statements in one request
sql_list = [
    "INSERT INTO users (name, age) VALUES ('Alice', 30)",
    "INSERT INTO users (name, age) VALUES ('Bob', 25)",
    "INSERT INTO users (name, age) VALUES ('Charlie', 35)",
]
cur.executemany_batch(sql_list)
```
Performance comparison:
| Method | Round Trips | Best For |
|---|---|---|
| `execute()` in loop | N | Few rows |
| `executemany()` for INSERT/UPDATE/DELETE | 1 | Parameterized bulk DML |
| `executemany()` for SELECT | N | Repeated SELECTs with separate parameter sets |
| `executemany_batch()` | 1 | Many distinct SQL statements |
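For very large datasets, bounding each batch keeps client memory and request size in check. A hypothetical chunking helper — the chunk size and the per-chunk commit are arbitrary choices, not pycubrid requirements:

```python
def chunked(seq, size):
    """Yield successive slices of seq with at most `size` items each."""
    for start in range(0, len(seq), size):
        yield seq[start:start + size]

def bulk_insert(conn, cur, sql, rows, chunk_size=1000):
    """Run executemany in bounded chunks, committing after each chunk."""
    for chunk in chunked(rows, chunk_size):
        cur.executemany(sql, chunk)
        conn.commit()
```

Committing per chunk trades all-or-nothing semantics for bounded transaction size; keep a single commit at the end instead if the whole load must be atomic.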
Prepared Statement Issues¶
execute(sql, params) Pattern¶
Correct pattern:
```python
sql = "SELECT * FROM users WHERE department = ?"

# Execute with SQL + parameters
cur.execute(sql, ("Engineering",))
engineers = cur.fetchall()

cur.execute(sql, ("Marketing",))
marketers = cur.fetchall()
```
Key points:
- Always pass the SQL string as the first argument to `execute()`
- Pass parameter values in the second argument
- Each call uses CAS `PREPARE_AND_EXECUTE`; no separate prepare step is needed
Mixing Parameterized and Direct Execution¶
You can safely mix parameterized and direct SQL execution on one cursor:
```python
cur.execute("SELECT * FROM users WHERE id = ?", (1,))
cur.execute("SELECT * FROM departments")
cur.execute("SELECT * FROM users WHERE id = ?", (2,))
```
Best practice: Keep SQL explicit at each call site:
```python
sql = "SELECT * FROM users WHERE id = ?"
cur.execute(sql, (1,))
cur.execute(sql, (2,))

cur.execute("SELECT COUNT(*) FROM users")
```
Docker Issues¶
Container Starts but Cannot Connect¶
Check 1: Container is actually running:
Check 2: Wait for initialization — CUBRID takes a few seconds to start:
```bash
docker compose up -d
sleep 10  # Wait for full initialization

# Test connection
python3 -c "
import pycubrid
conn = pycubrid.connect(host='localhost', port=33000, database='testdb', user='dba')
print('Connected!')
print('Version:', conn.get_server_version())
conn.close()
"
```
Check 3: Port mapping is correct:
Database Not Found¶
Symptom:
The Docker image creates only the database specified in CUBRID_DB:
```yaml
# docker-compose.yml
services:
  cubrid:
    image: cubrid/cubrid:11.2
    environment:
      CUBRID_DB: testdb  # Only this database is created
```
Fix: Either:
1. Set CUBRID_DB to match your connection's database name
2. Create the database manually inside the container:
Container Health Check¶
Add a health check to your docker-compose.yml:
```yaml
services:
  cubrid:
    image: cubrid/cubrid:11.2
    container_name: cubrid-test
    ports:
      - "33000:33000"
    environment:
      CUBRID_DB: testdb
    healthcheck:
      test: ["CMD", "cubrid", "broker", "status"]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 15s
```
Wait for health check in tests:
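A minimal way to implement that wait is to poll the broker port until it accepts TCP connections. This sketch is illustrative — host, port, and timeouts are assumptions — and an open port only proves the broker is listening, not that the database is fully ready:

```python
import socket
import time

def wait_for_port(host="localhost", port=33000, timeout=30.0):
    """Poll until a TCP connection to host:port succeeds; False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1.0):
                return True
        except OSError:
            time.sleep(0.5)  # broker not accepting yet; try again shortly
    return False
```

Pair this with the container health check above: the health check gates `depends_on` ordering, while the port poll protects test code that connects directly.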
SQLAlchemy Integration Issues¶
Wrong Connection URL Format¶
Correct URL format for pycubrid:
```python
# pycubrid driver
engine = create_engine("cubrid+pycubrid://dba@localhost:33000/testdb")

# With password
engine = create_engine("cubrid+pycubrid://dba:password@localhost:33000/testdb")
```
Common mistakes:
```python
# WRONG — missing driver specification (defaults to C-extension driver)
engine = create_engine("cubrid://dba@localhost:33000/testdb")

# WRONG — wrong port format
engine = create_engine("cubrid+pycubrid://dba@localhost/testdb?port=33000")

# WRONG — wrong scheme
engine = create_engine("pycubrid://dba@localhost:33000/testdb")
```
Autocommit Conflicts¶
Symptom: Data is committed even though you haven't called session.commit().
Cause: The CUBRID server default is autocommit=True. SQLAlchemy's pycubrid dialect sets autocommit=False on each new connection, but if the dialect is misconfigured, the server default takes effect.
Fix: Ensure you're using cubrid+pycubrid:// in the connection URL, which loads the correct dialect that manages autocommit properly.
Connection Pool Exhaustion¶
Symptom:
Fix: Tune the connection pool:
```python
from sqlalchemy import create_engine

engine = create_engine(
    "cubrid+pycubrid://dba@localhost:33000/testdb",
    pool_size=10,        # Maximum persistent connections
    max_overflow=20,     # Additional connections beyond pool_size
    pool_timeout=30,     # Seconds to wait for available connection
    pool_pre_ping=True,  # Test connections before use
    pool_recycle=1800,   # Recycle connections every 30 minutes
)
```
Ensure connections are returned to the pool:
```python
# CORRECT — context manager returns connection
with engine.connect() as conn:
    result = conn.execute(text("SELECT 1"))

# WRONG — connection never returned
conn = engine.connect()
result = conn.execute(text("SELECT 1"))
# conn.close() is never called!
```
Performance Issues¶
Slow Queries¶
Diagnostic steps:
- Check query execution time in your application code:
```python
import time

start = time.perf_counter()
cur.execute("SELECT * FROM large_table WHERE status = ?", ("active",))
rows = cur.fetchall()
elapsed = time.perf_counter() - start
print(f"Query took {elapsed:.3f}s, returned {len(rows)} rows")
```
- Add indexes for frequently queried columns
- Use `LIMIT` to restrict result set size
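The timing step above can be wrapped in a reusable context manager so each query site stays tidy (a convenience sketch, not part of pycubrid):

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(label="query"):
    """Print wall-clock time for the enclosed block, even if it raises."""
    start = time.perf_counter()
    try:
        yield
    finally:
        elapsed = time.perf_counter() - start
        print(f"{label} took {elapsed:.3f}s")
```

Usage: `with timed("active users"): cur.execute(...)` — the timing boilerplate then lives in one place instead of at every call site.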
High Memory Usage¶
Symptom: Python process consumes excessive memory with large result sets.
Cause: fetchall() loads all rows into memory at once.
Fix: Use fetchone() or fetchmany() for large result sets:
```python
# WRONG — loads all 1 million rows into memory
cur.execute("SELECT * FROM large_table")
rows = cur.fetchall()  # 1M rows in memory!

# CORRECT — process one row at a time
cur.execute("SELECT * FROM large_table")
for row in cur:  # Iterator protocol — fetches in batches
    process(row)

# ALSO CORRECT — fetch in chunks
cur.execute("SELECT * FROM large_table")
while True:
    batch = cur.fetchmany(1000)
    if not batch:
        break
    for row in batch:
        process(row)
```
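The fetchmany loop can be packaged as a generator so call sites keep the simple for-row style while fetching stays batched (a convenience sketch; the batch size is an arbitrary default):

```python
def iter_rows(cur, batch_size=1000):
    """Yield rows one at a time while fetching from the cursor in batches."""
    while True:
        batch = cur.fetchmany(batch_size)
        if not batch:
            return
        yield from batch
```

Usage: `for row in iter_rows(cur): process(row)` — memory stays bounded by `batch_size` regardless of result-set size.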
Connection Overhead¶
Symptom: Opening connections is slow.
Cause: Each pycubrid.connect() performs a TCP handshake + CAS broker handshake + database open (3+ round trips).
Fix for applications: Use SQLAlchemy connection pooling:
```python
from sqlalchemy import create_engine

# Connection pool reuses existing connections
engine = create_engine(
    "cubrid+pycubrid://dba@localhost:33000/testdb",
    pool_size=5,
    pool_pre_ping=True,
)
```
Fix for scripts: Reuse a single connection instead of opening/closing repeatedly.
Debugging Techniques¶
Enable Verbose Logging¶
pycubrid includes opt-in DEBUG logging in pycubrid.connection, pycubrid.cursor,
pycubrid.lob, and the async modules. Enable it through Python's logging configuration:
```python
import logging
import pycubrid

logging.basicConfig(level=logging.DEBUG)

conn = pycubrid.connect(host="localhost", port=33000, database="testdb", user="dba")

# Check connection state
print(f"Server version: {conn.get_server_version()}")
print(f"Autocommit: {conn.autocommit}")

# Check cursor state after query
cur = conn.cursor()
cur.execute("SELECT * FROM users")
print(f"Description: {cur.description}")
print(f"Row count: {cur.rowcount}")
```
The driver's debug logs intentionally avoid printing bound parameter values.
Inspect Server Version¶
```python
conn = pycubrid.connect(host="localhost", port=33000, database="testdb", user="dba")
version = conn.get_server_version()
print(f"CUBRID version: {version}")  # e.g., "11.2.0.0378"
conn.close()
```
Test Connection Script¶
Save this as test_connection.py for quick verification:
```python
#!/usr/bin/env python3
"""Quick pycubrid connection test."""
import sys

import pycubrid

try:
    conn = pycubrid.connect(
        host="localhost",
        port=33000,
        database="testdb",
        user="dba",
    )
    print(f"✅ Connected to CUBRID {conn.get_server_version()}")

    cur = conn.cursor()
    cur.execute("SELECT 1 + 1")
    result = cur.fetchone()
    print(f"✅ Query result: {result[0]}")

    cur.execute("SELECT COUNT(*) FROM db_class")
    count = cur.fetchone()[0]
    print(f"✅ System tables: {count}")

    cur.close()
    conn.close()
    print("✅ All checks passed")
except pycubrid.OperationalError as e:
    print(f"❌ Connection failed: {e}")
    sys.exit(1)
except pycubrid.ProgrammingError as e:
    print(f"❌ Query failed: {e}")
    sys.exit(1)
```
SQLAlchemy Debug Logging¶
```python
import logging

logging.basicConfig()
logging.getLogger("sqlalchemy.engine").setLevel(logging.DEBUG)

engine = create_engine("cubrid+pycubrid://dba@localhost:33000/testdb", echo=True)
```
This shows all SQL statements, parameters, and execution times.
See also: Connection Guide · API Reference · Examples · Development