
New Feature: RGB Augmentation from IGN Orthophotos

· 7 min read
Simon Ducournau
Lead Developer & Researcher

We're excited to announce a major new feature for the IGN LiDAR HD Processing Library: RGB Augmentation from IGN Orthophotos! 🎨

This feature automatically enriches your LiDAR point clouds with RGB colors fetched directly from IGN's high-resolution orthophoto service, enabling multi-modal machine learning and enhanced visualization.

What's New?

🎨 RGB Augmentation

Your LiDAR patches can now include RGB color information automatically extracted from IGN BD ORTHO® orthophotos at 20cm resolution:

# Simple command to add RGB colors
ign-lidar-hd patch \
  --input enriched_tiles/ \
  --output patches/ \
  --include-rgb \
  --rgb-cache-dir cache/

📦 Renamed Command: process → patch

For better clarity, we've renamed the process command to patch. Don't worry - the old command still works for backwards compatibility!

# New recommended command
ign-lidar-hd patch --input tiles/ --output patches/

# Old command (still works, shows deprecation warning)
ign-lidar-hd process --input tiles/ --output patches/

Why RGB Augmentation?

Multi-Modal Machine Learning

Combine geometric features with photometric information for better classification:

  • Improved accuracy: Models can learn from both shape and color
  • Better generalization: Color helps disambiguate similar geometries
  • Enhanced features: 30+ geometric features + 3 color channels (see the short sketch after this list)
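
For instance, combining the two modalities is just a matter of stacking the arrays. Here is a minimal sketch using the NPZ keys shown later in this post:

import numpy as np

# Load a patch and stack geometric features with RGB into one input matrix
data = np.load('patch.npz')
features = data['features']            # (N, 30+) geometric features
rgb = data['rgb']                      # (N, 3) normalized colors
combined = np.hstack([features, rgb])  # (N, 33+) multi-modal feature matrix
print(combined.shape)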

Better Visualization

Color-coded point clouds make analysis and debugging much easier:

import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D

# Load patch with RGB
data = np.load('patch.npz')
points = data['points']
rgb = data['rgb'] # Normalized [0, 1]

# Visualize with true colors
fig = plt.figure(figsize=(12, 8))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(points[:, 0], points[:, 1], points[:, 2], c=rgb, s=1)
plt.title('RGB-Augmented Point Cloud')
plt.show()

Zero Manual Work

No need to:

  • Download orthophotos manually
  • Align images with point clouds
  • Handle coordinate transformations
  • Manage multiple data sources

Everything is automatic! The library:

  1. Fetches orthophotos from IGN Géoplateforme WMS
  2. Maps 3D points to 2D pixels automatically
  3. Extracts and normalizes RGB values
  4. Caches downloads for performance

How It Works

The system is smart about caching - orthophotos are downloaded once per tile and reused for all patches, making the process fast and efficient.
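
The point-to-pixel mapping is essentially a projection of each point's X/Y coordinates into the orthophoto's pixel grid. The sketch below illustrates the idea with plain NumPy; it is an assumption about the approach, not the library's actual internals:

import numpy as np

def points_to_pixels(points_xy, bbox, img_width, img_height):
    """Map Lambert 93 (EPSG:2154) coordinates to orthophoto pixel indices.

    points_xy: (N, 2) array of X, Y coordinates
    bbox: (x_min, y_min, x_max, y_max) of the fetched orthophoto
    """
    x_min, y_min, x_max, y_max = bbox
    # Normalize coordinates to [0, 1] within the bounding box
    u = (points_xy[:, 0] - x_min) / (x_max - x_min)
    v = (points_xy[:, 1] - y_min) / (y_max - y_min)
    # Convert to pixel indices (image origin is the top-left corner)
    cols = np.clip((u * (img_width - 1)).astype(int), 0, img_width - 1)
    rows = np.clip(((1 - v) * (img_height - 1)).astype(int), 0, img_height - 1)
    return rows, cols

# Example: sample colors for points from an (H, W, 3) orthophoto array
# rows, cols = points_to_pixels(points[:, :2], bbox, image.shape[1], image.shape[0])
# rgb = image[rows, cols] / 255.0  # normalize to [0, 1]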

Performance

Speed Benchmarks

Configuration          Time per Patch   Notes
Geometry only          0.5-2s           Baseline (no RGB)
RGB (cached)           0.6-2.5s         +0.1-0.5s (minimal impact)
RGB (first download)   2-7s             +2-5s (one-time cost)

With caching enabled (recommended!), the performance impact is minimal:

  • 10-20x faster than downloading every time
  • ~500KB-2MB cache size per tile
  • Automatic reuse across patches from same tile

Memory Impact

RGB augmentation adds only ~196 KB per patch (16,384 points × 3 colors × 4 bytes), which is negligible compared to the geometric features.
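
As a quick sanity check of that figure:

# Extra storage per patch for RGB: 16,384 points × 3 channels × 4 bytes (float32)
rgb_bytes = 16_384 * 3 * 4
print(rgb_bytes)  # 196,608 bytes ≈ 196 KB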

Python API

Basic Usage

from pathlib import Path
from ign_lidar import LiDARProcessor

# Initialize with RGB support
processor = LiDARProcessor(
    lod_level="LOD2",
    include_rgb=True,
    rgb_cache_dir=Path("cache/")
)

# Process tiles
patches = processor.process_tile("enriched_tile.laz", "output/")

# Each patch now has RGB!
import numpy as np
data = np.load("output/patch_0001.npz")
print(data.keys()) # ['points', 'features', 'labels', 'rgb', 'metadata']
print(data['rgb'].shape) # (N, 3) - normalized RGB [0, 1]

Advanced: Direct RGB Augmentation

from pathlib import Path

import numpy as np

from ign_lidar.rgb_augmentation import IGNOrthophotoFetcher

# Direct control over RGB fetching
fetcher = IGNOrthophotoFetcher(cache_dir=Path("cache/"))

# Fetch for a specific bounding box in Lambert 93 (example coordinates)
x_min, y_min, x_max, y_max = 649000.0, 6859000.0, 650000.0, 6860000.0
bbox = (x_min, y_min, x_max, y_max)
image = fetcher.fetch_orthophoto(bbox, tile_id="0123_4567")

# Augment points (X, Y, Z in Lambert 93)
points = np.array([[649500.0, 6859500.0, 45.0],
                   [649510.0, 6859510.0, 47.5]])
rgb = fetcher.augment_points_with_rgb(points, bbox, tile_id="0123_4567")

Data Specifications

IGN Service

  • Source: IGN Géoplateforme WMS
  • Layer: HR.ORTHOIMAGERY.ORTHOPHOTOS
  • Resolution: 20cm per pixel
  • CRS: EPSG:2154 (Lambert 93)
  • Coverage: Metropolitan France
  • Format: PNG (24-bit RGB)
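
For reference, a raw WMS GetMap request against this service looks roughly like the sketch below. The endpoint URL is an assumption (the library builds the request for you), but the layer, CRS, and format follow the specifications above:

import requests

# Assumed IGN Géoplateforme raster WMS endpoint (illustrative only)
WMS_URL = "https://data.geopf.fr/wms-r"

params = {
    "SERVICE": "WMS",
    "VERSION": "1.3.0",
    "REQUEST": "GetMap",
    "LAYERS": "HR.ORTHOIMAGERY.ORTHOPHOTOS",
    "CRS": "EPSG:2154",                       # Lambert 93
    "BBOX": "649000,6859000,650000,6860000",  # example 1 km × 1 km extent
    "WIDTH": "5000",                          # 1000 m at 20 cm per pixel
    "HEIGHT": "5000",
    "FORMAT": "image/png",
}

response = requests.get(WMS_URL, params=params, timeout=60)
response.raise_for_status()
with open("orthophoto.png", "wb") as f:
    f.write(response.content)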

Output Format

Each NPZ patch now includes:

{
    'points': np.ndarray,    # (N, 3) - X, Y, Z
    'features': np.ndarray,  # (N, 30+) - Geometric features
    'labels': np.ndarray,    # (N,) - Classification labels
    'rgb': np.ndarray,       # (N, 3) - RGB colors [0, 1]
    'metadata': dict         # Patch info
}

Getting Started

Installation

The RGB feature requires two additional packages:

pip install requests Pillow

# Or install with extras
pip install ign-lidar-hd[rgb]

Quick Start

# 1. Download tiles (as usual)
ign-lidar-hd download --bbox -2.0,47.0,-1.0,48.0 --output tiles/

# 2. Enrich with features (as usual)
ign-lidar-hd enrich --input-dir tiles/ --output enriched/

# 3. Create patches WITH RGB (new!)
ign-lidar-hd patch \
  --input-dir enriched/ \
  --output patches/ \
  --include-rgb \
  --rgb-cache-dir cache/ \
  --lod-level LOD2

Best Practices

1. Always Use Caching

# ✅ Good: With cache (fast)
ign-lidar-hd patch --include-rgb --rgb-cache-dir cache/

# ❌ Slow: No cache (downloads repeatedly)
ign-lidar-hd patch --include-rgb

Caching makes RGB augmentation 10-20x faster for subsequent patches.

2. Check RGB Statistics

import numpy as np

data = np.load('patch.npz')
rgb = data['rgb']

print(f"RGB mean: {rgb.mean(axis=0)}")
print(f"RGB std: {rgb.std(axis=0)}")
print(f"RGB range: [{rgb.min():.3f}, {rgb.max():.3f}]")

# Values should be in [0, 1]
assert rgb.min() >= 0 and rgb.max() <= 1, "RGB not normalized!"

3. Handle Missing Dependencies Gracefully

from pathlib import Path

from ign_lidar import LiDARProcessor

try:
    from ign_lidar.rgb_augmentation import IGNOrthophotoFetcher
    HAS_RGB = True
except ImportError:
    HAS_RGB = False
    print("Install 'requests' and 'Pillow' for RGB support")

processor = LiDARProcessor(
    include_rgb=HAS_RGB,
    rgb_cache_dir=Path("cache/") if HAS_RGB else None
)

Backwards Compatibility

Everything still works as before!

  • RGB augmentation is opt-in via --include-rgb flag
  • Default behavior unchanged (no RGB)
  • Old process command still works (with deprecation warning)
  • Existing scripts continue to function without modifications

Migration Guide

Command Line

# Old workflow (still works)
ign-lidar-hd process --input tiles/ --output patches/

# New workflow (recommended)
ign-lidar-hd patch --input tiles/ --output patches/

# New workflow with RGB
ign-lidar-hd patch --input tiles/ --output patches/ --include-rgb

Python API

# Old code (still works)
processor = LiDARProcessor(lod_level="LOD2")
patches = processor.process_tile("tile.laz", "output/")

# New with RGB (opt-in)
processor = LiDARProcessor(
    lod_level="LOD2",
    include_rgb=True,             # NEW!
    rgb_cache_dir=Path("cache/")  # NEW!
)
patches = processor.process_tile("tile.laz", "output/")

Use Cases

1. Building Material Classification

Use color to distinguish building materials:

  • Brick walls: Red/brown tones
  • Concrete: Gray tones
  • Glass windows: Reflective/variable colors
  • Vegetation: Green tones
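
As a rough illustration, a color-based heuristic over the normalized RGB values might look like this (the thresholds are purely illustrative assumptions, not values used by the library):

import numpy as np

data = np.load('patch.npz')
rgb = data['rgb']  # (N, 3), normalized [0, 1]
r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]

# Very coarse heuristics on dominant channels (illustrative thresholds only)
is_vegetation = (g > r) & (g > b)                              # green-dominant
is_brick = (r > 0.4) & (r > g + 0.1) & (r > b + 0.1)           # red/brown-dominant
is_concrete = (np.abs(r - g) < 0.05) & (np.abs(g - b) < 0.05)  # gray tones

print(f"vegetation-like: {is_vegetation.mean():.1%}")
print(f"brick-like:      {is_brick.mean():.1%}")
print(f"concrete-like:   {is_concrete.mean():.1%}")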

2. Quality Control

Identify misalignments or processing errors by checking if colors match expected semantics:

import numpy as np

data = np.load('patch.npz')
rgb = data['rgb']
labels = data['labels']

# Example class id for roofs; adjust to your label schema
ROOF_CLASS = 6

# Roofs should be dark (tiles) or light (metal)
roof_colors = rgb[labels == ROOF_CLASS]
print(f"Roof colors: R={roof_colors[:, 0].mean():.2f}")

3. Multi-Modal Deep Learning

Train models on both geometry and photometry:

import torch
import torch.nn as nn

class MultiModalNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.geom_branch = nn.Linear(30, 64)  # geometric features -> 64
        self.rgb_branch = nn.Linear(3, 16)    # RGB channels -> 16
        self.classifier = nn.Linear(80, 15)   # 64 + 16 -> 15 classes

    def forward(self, geometry, rgb):
        g = self.geom_branch(geometry)
        r = self.rgb_branch(rgb)
        combined = torch.cat([g, r], dim=-1)
        return self.classifier(combined)
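
A quick smoke test with random tensors confirms the expected shapes (the batch size of 8 is arbitrary):

# Forward pass with dummy data: 8 points, 30 geometric features + 3 RGB channels
model = MultiModalNet()
geometry = torch.rand(8, 30)
rgb = torch.rand(8, 3)
logits = model(geometry, rgb)
print(logits.shape)  # torch.Size([8, 15])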

Documentation

Full documentation is available on the project's documentation site.

What's Next?

We're continuously improving the library. Upcoming features:

  • 🔧 Support for custom orthophoto sources
  • 📊 RGB-based feature engineering
  • 🎯 Pre-trained multi-modal models
  • 🌍 Support for other WMS services

Get Involved

Try It Now!

# Install/upgrade
pip install --upgrade ign-lidar-hd

# Install RGB dependencies
pip install requests Pillow

# Start using RGB augmentation
ign-lidar-hd patch \
  --input your_tiles/ \
  --output patches/ \
  --include-rgb \
  --rgb-cache-dir cache/

Happy processing! 🚀✨