Y.C Lee

Greenplum for Semiconductor Wafer Manufacturing

Design goal: create a comprehensive system design that uses Greenplum as the central data platform for semiconductor wafer processing, integrating the surrounding manufacturing execution and control systems (MES, FDC, APC/SPC, defect inspection, and wafer test).

The accompanying diagrams (Greenplum integration, data flow sequence, data architecture and schema design, and deployment architecture) are not reproduced here.

Technical Specifications - Semiconductor Manufacturing Data Platform

System Performance Requirements

Data Volume Specifications

| Data Source | Volume/Day | Peak Rate | Retention | Storage Tier |
| --- | --- | --- | --- | --- |
| Equipment Sensors (FDC) | 50 TB | 100K msgs/sec | 90 days hot, 2 years warm | NVMe SSD |
| Process Parameters (APC/SPC) | 5 TB | 10K msgs/sec | 1 year hot, 5 years warm | SSD/HDD hybrid |
| Production Events (MES) | 1 TB | 1K msgs/sec | 3 years hot, 10 years cold | SSD/HDD |
| Test Results (CP/WAT) | 10 TB | 5K msgs/sec | 2 years hot, 10 years warm | SSD |
| Defect Data | 20 TB | 2K msgs/sec | 1 year hot, 5 years warm | SSD |
| Total Daily Volume | 86 TB | 118K msgs/sec | | |
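
These retention tiers map naturally onto Greenplum range partitioning, with older partitions migrated to warm or cold tablespaces. A minimal sketch of a column-oriented, daily-partitioned FDC fact table follows; the sensor.sensor_readings table, its columns, and the connection string are illustrative assumptions, not part of the specification.

# Illustrative DDL for a column-oriented, daily-partitioned sensor fact table.
# Schema, column names, and the DSN are assumptions, not part of the spec.
import psycopg2

DDL = """
CREATE TABLE sensor.sensor_readings (
    equipment_id  text,
    parameter     text,
    reading_ts    timestamptz,
    value         double precision
)
WITH (appendoptimized=true, orientation=column, compresstype=zstd, compresslevel=5)
DISTRIBUTED BY (equipment_id)
PARTITION BY RANGE (reading_ts)
(
    START (date '2025-01-01') INCLUSIVE
    END   (date '2026-01-01') EXCLUSIVE
    EVERY (INTERVAL '1 day')
);
"""

with psycopg2.connect("host=greenplum-master dbname=manufacturing user=gpadmin") as conn:
    with conn.cursor() as cur:
        cur.execute(DDL)
        # Partitions that age out of the hot window can later be moved to
        # warm-tier storage with ALTER TABLE ... SET TABLESPACE.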

Performance Targets

| Metric | Target | Measurement |
| --- | --- | --- |
| Query Response Time | <5 seconds | 95th percentile for dashboard queries |
| Data Ingestion Latency | <30 seconds | End-to-end from equipment to Greenplum |
| System Availability | 99.9% | 8.76 hours downtime/year maximum |
| Concurrent Users | 500+ | Simultaneous dashboard users |
| Analytics Processing | <1 hour | Complex yield analysis queries |
| Backup/Recovery RTO | <4 hours | Recovery Time Objective |
| Backup/Recovery RPO | <15 minutes | Recovery Point Objective |

Greenplum Cluster Configuration

Hardware Specifications

Master Nodes (2x - Active/Standby)

Hardware:
  Model: Dell PowerEdge R760
  CPU: 2x Intel Xeon Platinum 8358 (32 cores, 2.6GHz)
  Memory: 512GB DDR4-3200
  Storage: 4x 1.92TB NVMe SSD (RAID 10)
  Network: 2x 25GbE + 2x 100GbE

Configuration:
  OS: Red Hat Enterprise Linux 8.6
  Greenplum: Version 7.0
  PostgreSQL: Version 12.x base
  Memory Settings:
    shared_buffers: 128GB
    work_mem: 1GB
    maintenance_work_mem: 8GB
    max_connections: 1000

Segment Nodes - Hot Tier (8x nodes)

Hardware:
  Model: Dell PowerEdge R750
  CPU: 2x Intel Xeon Gold 6338 (32 cores, 2.0GHz)  
  Memory: 1TB DDR4-3200
  Storage: 8x 3.84TB NVMe SSD (RAID 10)
  Network: 2x 25GbE + 2x 100GbE

Configuration:
  Segments per Node: 4 primary + 4 mirror
  Total Segments: 32 primary + 32 mirror
  Memory per Segment: 120GB
  Storage per Segment: 12TB usable

Segment Nodes - Warm Tier (8x nodes)

Hardware:
  Model: Dell PowerEdge R750
  CPU: 2x Intel Xeon Silver 4314 (16 cores, 2.4GHz)
  Memory: 512GB DDR4-3200  
  Storage: 4x 7.68TB SSD + 8x 10TB HDD (RAID 6)
  Network: 2x 25GbE

Configuration:
  Segments per Node: 2 primary + 2 mirror
  Total Segments: 16 primary + 16 mirror
  Memory per Segment: 240GB
  Storage per Segment: 60TB usable

Segment Nodes - Cold Tier (8x nodes)

Hardware:
  Model: Dell PowerEdge R740
  CPU: 2x Intel Xeon Silver 4210R (10 cores, 2.4GHz)
  Memory: 256GB DDR4-2933
  Storage: 12x 18TB HDD (RAID 6)
  Network: 2x 10GbE

Configuration:
  Segments per Node: 1 primary + 1 mirror
  Total Segments: 8 primary + 8 mirror  
  Memory per Segment: 240GB
  Storage per Segment: 150TB usable

Greenplum Optimization Parameters

Master Node Configuration

# postgresql.conf - Master Node
max_connections = 1000
superuser_reserved_connections = 10
shared_buffers = 128GB
work_mem = 1GB
maintenance_work_mem = 8GB
dynamic_shared_memory_type = posix

# Optimizer settings
enable_nestloop = off
enable_mergejoin = on
enable_hashjoin = on
enable_seqscan = on
enable_indexscan = on
enable_bitmapscan = on

# Greenplum specific
gp_autostats_mode = on_change
gp_autostats_on_change_threshold = 2147483647
gp_enable_global_deadlock_detector = on
gp_log_gang = terse
gp_max_packet_size = 65536

# Connection and authentication
listen_addresses = '*'
port = 5432
max_prepared_transactions = 500

Segment Node Configuration

# postgresql.conf - Segment Nodes
max_connections = 750
shared_buffers = 120GB
work_mem = 512MB
maintenance_work_mem = 4GB
temp_buffers = 32MB
max_prepared_transactions = 500

# I/O settings
max_wal_size = 8GB            # replaces checkpoint_segments, which was removed in the PostgreSQL 12 base
checkpoint_completion_target = 0.9
wal_buffers = 64MB
wal_writer_delay = 10ms

# Query processing
effective_cache_size = 600GB
random_page_cost = 1.0
seq_page_cost = 1.0
cpu_tuple_cost = 0.01
cpu_index_tuple_cost = 0.005
cpu_operator_cost = 0.0025

# Greenplum segment identity (written by gpinitsystem to each segment's
# internal.auto.conf; shown here for reference, not edited by hand)
gp_contentid = 0   # 0..N-1 per primary segment; -1 is reserved for the coordinator
gp_dbid = 2        # unique per segment instance
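
In practice these GUCs are pushed out with the gpconfig utility rather than edited on each host. The following is a minimal sketch run as gpadmin on the coordinator; the parameter values are taken from the blocks above, and gpconfig/gpstop being on the PATH is an assumption.

# Apply selected GUCs cluster-wide with gpconfig, then reload.
import subprocess

settings = {
    "work_mem": "512MB",
    "maintenance_work_mem": "4GB",
    "gp_autostats_mode": "on_change",
}

for name, value in settings.items():
    subprocess.run(["gpconfig", "-c", name, "-v", value], check=True)

# shared_buffers differs between coordinator (128GB) and segments (120GB)
subprocess.run(["gpconfig", "-c", "shared_buffers", "-v", "120GB", "-m", "128GB"], check=True)

# Reload what can be reloaded; shared_buffers still requires a restart (gpstop -ar)
subprocess.run(["gpstop", "-u"], check=True)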

Data Streaming Architecture

Apache Kafka Configuration

Broker Configuration

# server.properties
broker.id=1
listeners=PLAINTEXT://0.0.0.0:9092,SSL://0.0.0.0:9093
advertised.listeners=PLAINTEXT://kafka1:9092,SSL://kafka1:9093

# Network and I/O
num.network.threads=16
num.io.threads=32
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
num.replica.fetchers=4

# Log settings
num.partitions=12
default.replication.factor=3
min.insync.replicas=2
log.retention.hours=168
log.retention.bytes=1073741824000
log.segment.bytes=1073741824
log.cleanup.policy=delete

# Compression and performance
compression.type=lz4
message.max.bytes=10485760
replica.fetch.max.bytes=10485760
group.initial.rebalance.delay.ms=3000

# JVM settings (passed via KAFKA_HEAP_OPTS / KAFKA_JVM_PERFORMANCE_OPTS, not in server.properties)
-Xmx32g -Xms32g
-XX:+UseG1GC
-XX:MaxGCPauseMillis=20
-XX:InitiatingHeapOccupancyPercent=35

Topic Specifications

# Equipment sensor data - high volume, short retention
kafka-topics.sh --create --topic equipment-sensors \
  --partitions 24 --replication-factor 3 \
  --config compression.type=lz4 \
  --config retention.ms=259200000 \
  --config segment.ms=3600000 \
  --config cleanup.policy=delete

# Production events - medium volume, longer retention
kafka-topics.sh --create --topic production-events \
  --partitions 12 --replication-factor 3 \
  --config compression.type=snappy \
  --config retention.ms=2592000000 \
  --config cleanup.policy=delete

# Test results - large messages, medium volume
kafka-topics.sh --create --topic test-results \
  --partitions 8 --replication-factor 3 \
  --config compression.type=gzip \
  --config max.message.bytes=10485760 \
  --config retention.ms=604800000

# Alert events - low volume, immediate processing
kafka-topics.sh --create --topic manufacturing-alerts \
  --partitions 4 --replication-factor 3 \
  --config compression.type=snappy \
  --config retention.ms=86400000 \
  --config cleanup.policy=delete
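
To show how equipment data lands in these topics, here is a minimal producer sketch. It assumes the kafka-python client; the additional broker host names and the message fields are illustrative.

# Minimal FDC sensor producer for the equipment-sensors topic (kafka-python).
import json
import time
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers=["kafka1:9092", "kafka2:9092", "kafka3:9092"],
    compression_type="lz4",                       # matches the topic/broker setting
    acks="all",                                   # respects min.insync.replicas=2
    linger_ms=20,                                 # small batching window for throughput
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    key_serializer=lambda k: k.encode("utf-8"),
)

reading = {
    "equipment_id": "ETCH-101",
    "parameter": "chamber_pressure",
    "sensor_value": 12.7,
    "timestamp": int(time.time() * 1000),
}

# Key by equipment_id so readings from one tool stay in partition order
producer.send("equipment-sensors", key=reading["equipment_id"], value=reading)
producer.flush()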

Apache Storm Configuration

Storm Cluster Specifications

# storm.yaml
storm.zookeeper.servers:
  - "zk1.manufacturing.local"
  - "zk2.manufacturing.local"
  - "zk3.manufacturing.local"
storm.zookeeper.port: 2181

# Nimbus configuration
nimbus.seeds: ["storm-nimbus1", "storm-nimbus2"]   # nimbus.host is superseded by nimbus.seeds
storm.local.dir: "/opt/storm/data"

# Worker configuration
supervisor.slots.ports:
  - 6700
  - 6701
  - 6702
  - 6703
worker.childopts: "-Xmx4g -XX:+UseConcMarkSweepGC"
supervisor.childopts: "-Xmx2g"

# Performance tuning
topology.workers: 8
topology.ackers: 4
topology.max.spout.pending: 10000
topology.message.timeout.secs: 300

Real-time Processing Topology

import java.util.HashMap;
import java.util.Map;

import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ManufacturingStreamTopology {

    public static class SensorDataBolt extends BaseBasicBolt {
        private static final Logger LOG = LoggerFactory.getLogger(SensorDataBolt.class);

        // MovingAverage is an assumed helper that keeps a fixed-size window
        // of readings and exposes getAverage()/getStdDev().
        private Map<String, MovingAverage> equipmentAverages;

        @Override
        public void prepare(Map stormConf, TopologyContext context) {
            // BaseBasicBolt acks tuples automatically; no OutputCollector is kept here.
            this.equipmentAverages = new HashMap<>();
        }

        @Override
        public void execute(Tuple input, BasicOutputCollector collector) {
            try {
                String equipmentId = input.getStringByField("equipment_id");
                Double sensorValue = input.getDoubleByField("sensor_value");
                String parameterName = input.getStringByField("parameter");

                // Maintain a moving average per equipment/parameter pair
                String key = equipmentId + "_" + parameterName;
                MovingAverage avg = equipmentAverages.computeIfAbsent(
                    key, k -> new MovingAverage(100)); // 100-point moving average

                avg.addValue(sensorValue);

                // Check for anomalies (3-sigma rule)
                if (Math.abs(sensorValue - avg.getAverage()) > 3 * avg.getStdDev()) {
                    collector.emit("anomaly-stream", new Values(
                        equipmentId, parameterName, sensorValue,
                        avg.getAverage(), System.currentTimeMillis()));
                }

                // Emit aggregated data roughly once per minute
                if (System.currentTimeMillis() % 60000 < 1000) {
                    collector.emit("aggregated-stream", new Values(
                        equipmentId, parameterName, avg.getAverage(),
                        avg.getStdDev(), System.currentTimeMillis()));
                }

            } catch (Exception e) {
                LOG.error("Error processing sensor data", e);
                collector.reportError(e);
            }
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declareStream("anomaly-stream",
                new Fields("equipment_id", "parameter", "value",
                           "expected", "timestamp"));
            declarer.declareStream("aggregated-stream",
                new Fields("equipment_id", "parameter", "avg_value",
                           "std_dev", "timestamp"));
        }
    }
}
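
The MovingAverage helper used above is assumed rather than shown; its rolling-window mean/standard-deviation logic and the 3-sigma check amount to roughly the following sketch.

# Rolling-window mean/stddev behind the 3-sigma anomaly check (illustrative).
from collections import deque
from statistics import mean, pstdev

class MovingAverage:
    def __init__(self, window_size=100):
        self.values = deque(maxlen=window_size)

    def add_value(self, value):
        self.values.append(value)

    def get_average(self):
        return mean(self.values) if self.values else 0.0

    def get_std_dev(self):
        return pstdev(self.values) if len(self.values) > 1 else 0.0

def is_anomaly(ma, value, sigma=3.0):
    """3-sigma rule: flag readings far from the recent rolling average."""
    std = ma.get_std_dev()
    return std > 0 and abs(value - ma.get_average()) > sigma * std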

Data Integration Specifications

Apache NiFi Configuration

System Requirements

Hardware:
  CPU: 16 cores minimum
  Memory: 64GB minimum  
  Storage: 10TB for content repository
  Network: 10GbE minimum

NiFi Configuration:
  Java Heap: 32GB
  FlowFile Repository: SSD storage
  Content Repository: High-throughput storage
  Provenance Repository: Separate disk array

# nifi.properties key settings
nifi.flowfile.repository.implementation=org.apache.nifi.controller.repository.WriteAheadFlowFileRepository
nifi.flowfile.repository.wal.implementation=org.apache.nifi.wali.SequentialAccessWriteAheadLog
nifi.flowfile.repository.directory=./flowfile_repository
nifi.flowfile.repository.checkpoint.interval=20 secs

nifi.content.repository.implementation=org.apache.nifi.controller.repository.FileSystemRepository
nifi.content.repository.directory.default=./content_repository
nifi.content.claim.max.appendable.size=50 MB
nifi.content.repository.archive.max.retention.period=7 days
nifi.content.repository.archive.max.usage.percentage=80%

nifi.provenance.repository.implementation=org.apache.nifi.provenance.WriteAheadProvenanceRepository
nifi.provenance.repository.directory.default=./provenance_repository
nifi.provenance.repository.max.storage.time=30 days
nifi.provenance.repository.max.storage.size=10 GB

Data Flow Templates

MES System Integration Flow
<template>
  <name>MES_Integration_Flow</name>
  <description>Extract and transform MES production data</description>

  <!-- Extract from MES database -->
  <processor>
    <id>extract-mes-data</id>
    <class>org.apache.nifi.processors.standard.ExecuteSQL</class>
    <properties>
      <property>
        <name>Database Connection Pooling Service</name>
        <value>mes-db-connection-pool</value>
      </property>
      <property>
        <name>SQL select query</name>
        <value>
          SELECT 
            lot_id, product_id, step_name, equipment_id,
            start_time, end_time, status, operator_id,
            recipe_parameters, quality_flags
          FROM production_tracking 
          WHERE last_modified >= CURRENT_TIMESTAMP - INTERVAL '5 MINUTES'
        </value>
      </property>
    </properties>
    <scheduling>
      <period>5 min</period>
      <concurrent-tasks>1</concurrent-tasks>
    </scheduling>
  </processor>

  <!-- Transform data format -->
  <processor>
    <id>transform-mes-data</id>
    <class>org.apache.nifi.processors.script.ExecuteScript</class>
    <properties>
      <property>
        <name>Script Engine</name>
        <value>python</value>
      </property>
      <property>
        <name>Script Body</name>
        <value><![CDATA[
# Jython script body for ExecuteScript. Assumes the incoming FlowFile content
# is a JSON array of records (e.g. ExecuteSQL followed by ConvertAvroToJSON).
import json
import uuid
from datetime import datetime

from org.apache.commons.io import IOUtils
from java.nio.charset import StandardCharsets
from org.apache.nifi.processor.io import StreamCallback


class TransformCallback(StreamCallback):
    """Rewrite the FlowFile content into the standard event format."""

    def process(self, inputStream, outputStream):
        content = IOUtils.toString(inputStream, StandardCharsets.UTF_8)
        records = json.loads(content)

        transformed_records = []
        for record in records:
            transformed_records.append({
                'event_id': str(uuid.uuid4()),
                'event_type': 'lot_tracking',
                'timestamp': datetime.now().isoformat(),
                'lot_id': record['lot_id'],
                'product_id': record['product_id'],
                'process_step': record['step_name'],
                'equipment_id': record['equipment_id'],
                'start_time': record['start_time'],
                'end_time': record['end_time'],
                'status': record['status'],
                'metadata': {
                    'operator_id': record['operator_id'],
                    'recipe_params': json.loads(record['recipe_parameters']),
                    'quality_flags': record['quality_flags']
                }
            })

        output_content = json.dumps(transformed_records, indent=2)
        outputStream.write(bytearray(output_content.encode('utf-8')))


flowFile = session.get()
if flowFile is not None:
    try:
        flowFile = session.write(flowFile, TransformCallback())
        session.transfer(flowFile, REL_SUCCESS)
    except Exception as e:
        log.error('Transform error: {}'.format(str(e)))
        session.transfer(flowFile, REL_FAILURE)
        ]]></value>
      </property>
    </properties>
  </processor>

  <!-- Load to Greenplum -->
  <processor>
    <id>load-to-greenplum</id>
    <class>org.apache.nifi.processors.standard.PutDatabaseRecord</class>
    <properties>
      <property>
        <name>Record Reader</name>
        <value>json-tree-reader</value>
      </property>
      <property>
        <name>Database Connection Pooling Service</name>
        <value>greenplum-connection-pool</value>
      </property>
      <property>
        <name>Statement Type</name>
        <value>INSERT</value>
      </property>
      <property>
        <name>Table Name</name>
        <value>production.lot_events</value>
      </property>
      <property>
        <name>Batch Size</name>
        <value>1000</value>
      </property>
    </properties>
  </processor>
</template>
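
The PutDatabaseRecord step batches the transformed events into production.lot_events. Outside NiFi, the equivalent load can be sketched with psycopg2; the DSN and the event_ts column name are assumptions based on the transform above.

# Batch-insert transformed MES events into Greenplum (illustrative equivalent
# of the PutDatabaseRecord step above).
import json
import psycopg2
from psycopg2.extras import execute_values

def load_lot_events(events_json):
    events = json.loads(events_json)
    rows = [
        (e["event_id"], e["event_type"], e["timestamp"], e["lot_id"],
         e["product_id"], e["process_step"], e["equipment_id"],
         e["start_time"], e["end_time"], e["status"], json.dumps(e["metadata"]))
        for e in events
    ]
    with psycopg2.connect("host=greenplum-master dbname=manufacturing") as conn:
        with conn.cursor() as cur:
            execute_values(
                cur,
                """INSERT INTO production.lot_events
                   (event_id, event_type, event_ts, lot_id, product_id, process_step,
                    equipment_id, start_time, end_time, status, metadata)
                   VALUES %s""",
                rows,
                page_size=1000,   # mirrors the Batch Size of 1000 configured above
            )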

Machine Learning Platform Specifications

Apache Spark Configuration

Cluster Specifications

# Spark Master Nodes (2x for HA)
Master_Nodes:
  Hardware: Dell PowerEdge R750
  CPU: 2x Intel Xeon Gold 6338 (32 cores)
  Memory: 256GB DDR4-3200
  Storage: 4x 1.92TB NVMe SSD
  Network: 2x 25GbE

# Spark Worker Nodes (6x)
Worker_Nodes:
  Hardware: Dell PowerEdge R750
  CPU: 2x Intel Xeon Gold 6338 (32 cores)
  Memory: 1TB DDR4-3200
  Storage: 8x 3.84TB NVMe SSD + 4x Tesla V100 GPU
  Network: 2x 25GbE + InfiniBand

# spark-defaults.conf
spark.master=spark://spark-master1:7077,spark-master2:7077
spark.executor.cores=8
spark.executor.memory=120g
spark.memory.fraction=0.8
spark.sql.adaptive.enabled=true
spark.sql.adaptive.coalescePartitions.enabled=true
spark.serializer=org.apache.spark.serializer.KryoSerializer
spark.sql.warehouse.dir=hdfs://namenode:9000/spark-warehouse
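
Training data for the yield models is pulled from Greenplum into Spark. A minimal sketch using Spark's generic JDBC reader follows; the dedicated Greenplum-Spark connector could be used instead, and the URL, view name, credentials, and partition column are placeholders.

# Read lot-level training data from Greenplum over JDBC (illustrative).
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("yield-training-extract")
    .getOrCreate()
)

training_df = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://greenplum-master:5432/manufacturing")
    .option("dbtable", "analytics.lot_yield_features")   # assumed view with the feature columns
    .option("user", "spark_etl")
    .option("password", "********")
    .option("fetchsize", "10000")
    .option("numPartitions", "32")         # parallel reads
    .option("partitionColumn", "lot_seq")  # assumed numeric column for range splitting
    .option("lowerBound", "0")
    .option("upperBound", "10000000")
    .load()
)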

ML Pipeline Implementation

from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler, StandardScaler
from pyspark.ml.classification import RandomForestClassifier
from pyspark.ml.regression import LinearRegression
from pyspark.ml.pipeline import Pipeline, PipelineModel
from pyspark.ml.evaluation import BinaryClassificationEvaluator
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder

class YieldPredictionPipeline:
    def __init__(self, spark_session):
        self.spark = spark_session
        self.model_path = "hdfs://namenode:9000/models/yield_prediction"

    def create_feature_pipeline(self):
        """Create feature engineering pipeline"""

        # Process parameter features
        process_features = [
            'temperature_avg', 'temperature_std', 'temperature_range',
            'pressure_avg', 'pressure_std', 'pressure_range',
            'flow_rate_avg', 'flow_rate_std', 'flow_rate_range',
            'power_avg', 'power_std', 'power_range'
        ]

        # Equipment health features  
        equipment_features = [
            'utilization_rate', 'cycle_count', 'maintenance_hours',
            'failure_count_30d', 'mtbf', 'mttr'
        ]

        # Historical yield features
        historical_features = [
            'prev_lot_yield', 'product_avg_yield_7d', 'product_avg_yield_30d',
            'equipment_avg_yield_7d', 'step_avg_yield_7d'
        ]

        # Quality features
        quality_features = [
            'defect_density', 'critical_dimension_variation',
            'overlay_error', 'thickness_uniformity'
        ]

        all_features = (process_features + equipment_features + 
                       historical_features + quality_features)

        # Feature assembly
        assembler = VectorAssembler(
            inputCols=all_features,
            outputCol="raw_features",
            handleInvalid="skip"
        )

        # Feature scaling
        scaler = StandardScaler(
            inputCol="raw_features",
            outputCol="scaled_features",
            withStd=True,
            withMean=True
        )

        return Pipeline(stages=[assembler, scaler])

    def create_prediction_model(self):
        """Create yield prediction model"""

        # Random Forest for yield classification (Good/Bad)
        rf_classifier = RandomForestClassifier(
            featuresCol="scaled_features",
            labelCol="yield_class",
            predictionCol="yield_prediction",
            probabilityCol="yield_probability",
            numTrees=200,
            maxDepth=15,
            minInstancesPerNode=10,
            seed=42
        )

        # Linear Regression for continuous yield prediction
        lr_regressor = LinearRegression(
            featuresCol="scaled_features",
            labelCol="yield_percentage",
            predictionCol="predicted_yield",
            maxIter=100,
            regParam=0.1,
            elasticNetParam=0.5
        )

        return Pipeline(stages=[rf_classifier, lr_regressor])

    def train_model(self, training_data):
        """Train the complete ML pipeline"""

        # Create complete pipeline
        feature_pipeline = self.create_feature_pipeline()
        prediction_pipeline = self.create_prediction_model()

        complete_pipeline = Pipeline(
            stages=feature_pipeline.getStages() + prediction_pipeline.getStages()
        )

        # Hyperparameter tuning
        param_grid = ParamGridBuilder() \
            .addGrid(complete_pipeline.getStages()[-2].numTrees, [100, 200, 300]) \
            .addGrid(complete_pipeline.getStages()[-2].maxDepth, [10, 15, 20]) \
            .addGrid(complete_pipeline.getStages()[-1].regParam, [0.01, 0.1, 1.0]) \
            .build()

        # Cross validation
        evaluator = BinaryClassificationEvaluator(
            labelCol="yield_class",
            rawPredictionCol="rawPrediction",
            metricName="areaUnderROC"
        )

        cv = CrossValidator(
            estimator=complete_pipeline,
            estimatorParamMaps=param_grid,
            evaluator=evaluator,
            numFolds=5,
            seed=42
        )

        # Train model
        cv_model = cv.fit(training_data)
        best_model = cv_model.bestModel

        # Save model
        best_model.write().overwrite().save(self.model_path)

        return best_model

    def batch_prediction(self, input_data):
        """Perform batch predictions on new data"""

        # Load trained model
        model = PipelineModel.load(self.model_path)

        # Make predictions
        predictions = model.transform(input_data)

        # Select relevant columns
        result = predictions.select(
            "lot_id", "wafer_id", "product_id",
            "yield_prediction", "yield_probability", "predicted_yield",
            "scaled_features"
        )

        return result
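
A usage sketch for the pipeline above, assuming the analytics.lot_yield_features view (with the feature, yield_class, and yield_percentage columns) and placeholder JDBC credentials:

# Train and apply the yield prediction pipeline (illustrative usage).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("yield-prediction").getOrCreate()

jdbc_opts = {
    "url": "jdbc:postgresql://greenplum-master:5432/manufacturing",
    "user": "spark_etl",
    "password": "********",
}

feature_df = (spark.read.format("jdbc")
              .options(dbtable="analytics.lot_yield_features", **jdbc_opts)
              .load())

pipeline = YieldPredictionPipeline(spark)
train_df, holdout_df = feature_df.randomSplit([0.8, 0.2], seed=42)

model = pipeline.train_model(train_df)
predictions = pipeline.batch_prediction(holdout_df)

# Vector-typed columns (probabilities, features) are not JDBC-friendly, so
# write only the scalar scores back to Greenplum for dashboards.
scored = predictions.select("lot_id", "wafer_id", "product_id",
                            "yield_prediction", "predicted_yield")
(scored.write.format("jdbc")
    .options(dbtable="analytics.yield_predictions", **jdbc_opts)
    .mode("append")
    .save())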

Security Specifications

Network Security Architecture

# Network Segmentation
vlans:
  manufacturing_floor:
    id: 100
    subnet: "10.100.0.0/16"
    security_zone: "manufacturing"
    firewall_rules:
      - allow: "tcp/502"  # Modbus
      - allow: "tcp/44818" # OPC UA
      - deny: "any/any"   # Default deny

  data_platform:
    id: 200
    subnet: "10.200.0.0/16" 
    security_zone: "data"
    firewall_rules:
      - allow: "tcp/5432"  # Greenplum
      - allow: "tcp/9092"  # Kafka
      - allow: "tcp/8080"  # NiFi
      - deny: "any/any"

  analytics:
    id: 300
    subnet: "10.300.0.0/16"
    security_zone: "analytics"
    firewall_rules:
      - allow: "tcp/8888"  # Jupyter
      - allow: "tcp/7077"  # Spark
      - deny: "any/any"

  management:
    id: 400
    subnet: "10.400.0.0/16"
    security_zone: "management"
    firewall_rules:
      - allow: "tcp/22"   # SSH
      - allow: "tcp/443"  # HTTPS
      - deny: "any/any"

Authentication & Authorization

# LDAP Integration
ldap_config:
  server: "ldaps://ad.manufacturing.local:636"
  base_dn: "dc=manufacturing,dc=local"
  bind_dn: "cn=svc_greenplum,ou=service_accounts,dc=manufacturing,dc=local"

# Role-Based Access Control
rbac_roles:
  data_engineer:
    permissions:
      - greenplum: ["CONNECT", "CREATE", "SELECT", "INSERT", "UPDATE", "DELETE"]
      - kafka: ["READ", "WRITE", "CREATE_TOPICS"]
      - nifi: ["READ", "WRITE", "MODIFY_FLOW"]

  analyst:
    permissions:
      - greenplum: ["CONNECT", "SELECT"]
      - tableau: ["READ", "CREATE_WORKBOOKS"]
      - jupyter: ["READ", "EXECUTE"]

  operator:
    permissions:
      - dashboards: ["READ"]
      - alerts: ["READ", "ACKNOWLEDGE"]

  administrator:
    permissions:
      - system: ["ALL"]
      - security: ["MANAGE_USERS", "MANAGE_ROLES"]

# SSL/TLS Configuration
ssl_config:
  greenplum:
    ssl_mode: "require"
    ssl_cert: "/etc/ssl/certs/greenplum.crt"
    ssl_key: "/etc/ssl/private/greenplum.key"
    ssl_ca: "/etc/ssl/certs/manufacturing-ca.crt"

  kafka:
    ssl_enabled: true
    ssl_keystore: "/etc/kafka/ssl/kafka.server.keystore.jks"
    ssl_truststore: "/etc/kafka/ssl/kafka.server.truststore.jks"
    ssl_protocols: "TLSv1.2,TLSv1.3"
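
On the client side, the Greenplum SSL settings translate into connection parameters such as the following; the host name, database, and role are placeholders, and the CA path mirrors the ssl_config above.

# Connect to Greenplum with TLS enforced, validating the internal CA (illustrative).
import psycopg2

conn = psycopg2.connect(
    host="greenplum-master.manufacturing.local",
    port=5432,
    dbname="manufacturing",
    user="analyst01",              # authenticated against AD via LDAP
    password="********",
    sslmode="verify-full",         # at least "require" per the spec; verify-full also checks the host name
    sslrootcert="/etc/ssl/certs/manufacturing-ca.crt",
)

with conn, conn.cursor() as cur:
    # Example of the analyst role's read-only access
    cur.execute("SELECT lot_id, status FROM production.lot_events LIMIT 10")
    print(cur.fetchall())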

Monitoring & Alerting Specifications

System Monitoring Stack

# Prometheus Configuration
prometheus:
  retention: "30d"
  scrape_interval: "15s"
  evaluation_interval: "15s"
  storage: "1TB SSD"

  targets:
    greenplum:
      - greenplum-master:9187
      - greenplum-segments:9187
    kafka:
      - kafka-brokers:9308
    storm:
      - storm-nimbus:8080
      - storm-supervisors:8080
    system:
      - all-nodes:9100

# Grafana Dashboards
grafana:
  dashboards:
    manufacturing_overview:
      metrics:
        - equipment_utilization
        - production_throughput  
        - yield_trends
        - alert_summary

    data_platform_health:
      metrics:
        - greenplum_query_performance
        - kafka_throughput
        - storm_processing_latency
        - storage_utilization

    system_infrastructure:
      metrics:
        - cpu_utilization
        - memory_usage
        - disk_io
        - network_traffic

# Alert Rules
alerting_rules:
  critical:
    - name: "greenplum_master_down"
      condition: "up{job='greenplum-master'} == 0"
      for: "30s"

    - name: "high_equipment_downtime"  
      condition: "equipment_downtime_hours > 4"
      for: "5m"

    - name: "yield_drop_significant"
      condition: "rate(yield_percentage[1h]) < 0.8 * rate(yield_percentage[24h:1h])"
      for: "15m"

  warning:
    - name: "kafka_lag_high"
      condition: "kafka_consumer_lag > 100000"
      for: "10m"

    - name: "query_performance_degraded"
      condition: "greenplum_query_duration_p95 > 10"
      for: "15m"
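
Manufacturing-specific metrics such as data freshness are not exposed by the stock exporters; a minimal custom exporter sketch using the Python prometheus_client library follows. The metric name, the query, and the sensor table (from the earlier partitioning sketch) are assumptions.

# Expose equipment data freshness to Prometheus (illustrative custom exporter).
import time
import psycopg2
from prometheus_client import Gauge, start_http_server

FRESHNESS = Gauge(
    "manufacturing_data_freshness_seconds",
    "Seconds since the newest record arrived in Greenplum",
    ["source"],
)

QUERY = """
SELECT extract(epoch FROM now() - max(reading_ts))
FROM sensor.sensor_readings
"""

def collect_freshness():
    with psycopg2.connect("host=greenplum-master dbname=manufacturing") as conn:
        with conn.cursor() as cur:
            cur.execute(QUERY)
            lag_seconds = cur.fetchone()[0] or 0.0
            FRESHNESS.labels(source="fdc").set(lag_seconds)

if __name__ == "__main__":
    start_http_server(9400)          # scrape target for Prometheus
    while True:
        collect_freshness()
        time.sleep(15)               # matches the 15s scrape interval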

Performance Monitoring KPIs

| Component | KPI | Target | Alert Threshold |
| --- | --- | --- | --- |
| Greenplum Master | Query Response Time | <3 s avg | >10 s for 5 min |
| Greenplum Segments | CPU Utilization | <80% avg | >90% for 10 min |
| Kafka Brokers | Message Throughput | 100K msg/sec | <50K msg/sec |
| Storm Topology | Processing Latency | <5 s avg | >30 s for 5 min |
| Equipment Integration | Data Freshness | <60 s | >300 s |
| Yield Analysis | Model Accuracy | >95% | <90% |

Disaster Recovery Specifications

Backup Strategy

backup_strategy:
  greenplum:
    full_backup:
      frequency: "weekly"
      retention: "3 months"
      storage: "tape_library"

    incremental_backup:
      frequency: "daily"
      retention: "30 days"
      storage: "disk_array"

    continuous_wal_shipping:
      frequency: "real-time"
      retention: "7 days"
      target: "dr_site"

  kafka:
    topic_replication:
      target: "dr_kafka_cluster"
      replication_factor: 3

  application_data:
    configuration_backup:
      frequency: "daily"
      storage: "git_repository"

    model_backup:
      frequency: "after_training"
      storage: "hdfs_dr_cluster"

# Recovery Procedures
recovery_procedures:
  rto_targets:
    tier1_systems: "1 hour"    # Critical production systems
    tier2_systems: "4 hours"   # Analytics and reporting
    tier3_systems: "24 hours"  # Development and testing

  rpo_targets:
    production_data: "15 minutes"
    analytical_data: "4 hours"
    historical_data: "24 hours"
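
The weekly full / daily incremental cycle can be driven with the gpbackup utility; a minimal sketch follows. The backup directory, database name, and scheduling logic are assumptions, and gpbackup must be installed on the coordinator.

# Drive weekly full and daily incremental gpbackup runs (illustrative).
import datetime
import subprocess

DB = "manufacturing"
BACKUP_DIR = "/backup/greenplum"

def run_backup(today):
    cmd = ["gpbackup", "--dbname", DB, "--backup-dir", BACKUP_DIR, "--leaf-partition-data"]
    if today.weekday() != 6:            # Sunday = full backup, otherwise incremental
        cmd.append("--incremental")
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    run_backup(datetime.date.today())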

This technical specification sets out the configuration requirements, performance targets, and implementation guidelines needed to deploy and operate the semiconductor manufacturing data platform with Greenplum at its core. Together, these specifications target enterprise-grade reliability, security, and performance while enabling advanced analytics and real-time decision making across all manufacturing processes.
