Posted on

How would you decide between using MongoDB (NoSQL) and PostgreSQL (relational database) for a new application?

Deciding between MongoDB (NoSQL) and PostgreSQL (relational database) for a new application depends on several factors, including the application’s data structure, scalability needs, transaction requirements, development speed, and team expertise. Below, I’ll outline the key considerations to help you make an informed decision.


1. Understand the Data Structure and Relationships

The nature of your data is one of the most critical factors in choosing between MongoDB and PostgreSQL.

  • Relational Data:
    • If your application involves complex relationships between entities (e.g., customers, orders, products) that require joins, foreign keys, and strict data integrity, PostgreSQL is the better choice.
    • PostgreSQL excels at maintaining data consistency across related tables and supports ACID (Atomicity, Consistency, Isolation, Durability) compliance, which is essential for applications like financial systems or e-commerce platforms.
  • Unstructured or Semi-Structured Data:
    • If your data is hierarchical, nested, or doesn’t fit neatly into tables (e.g., JSON-like documents, logs, or user profiles with varying fields), MongoDB is more suitable.
    • MongoDB’s document-based model allows you to store data in flexible, schemaless documents, making it ideal for applications where data structures evolve frequently.
  • Schema Flexibility:
    • MongoDB allows for dynamic schemas, meaning documents in the same collection can have different fields without a predefined structure. This is useful for rapid prototyping or applications with evolving requirements.
    • PostgreSQL requires a predefined schema, which is beneficial for structured data but can be restrictive if the schema changes frequently.
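To make the schema contrast concrete, here is a minimal sketch using the official mongodb and pg Node.js drivers (the database, collection, table, and field names are hypothetical):

const { MongoClient } = require('mongodb');
const { Client } = require('pg');

async function main() {
  // MongoDB: documents in the same collection can have different fields.
  const mongo = await MongoClient.connect('mongodb://localhost:27017');
  const users = mongo.db('app').collection('users');
  await users.insertOne({ name: 'Ada', favoriteEditor: 'vim' });
  await users.insertOne({ name: 'Bob', petNames: ['Rex', 'Mittens'] }); // no schema migration needed

  // PostgreSQL: the shape of the data is declared up front.
  const pg = new Client({ connectionString: 'postgres://localhost/app' });
  await pg.connect();
  await pg.query(`CREATE TABLE IF NOT EXISTS users (
    id    SERIAL PRIMARY KEY,
    name  TEXT NOT NULL,
    email TEXT UNIQUE
  )`);
  await pg.query('INSERT INTO users (name, email) VALUES ($1, $2)', ['Ada', 'ada@example.com']);

  await mongo.close();
  await pg.end();
}

main().catch(console.error);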

2. Consider Scalability and Performance Needs

Scalability and performance requirements can also guide your decision.

  • Horizontal Scaling:
    • MongoDB is designed for horizontal scaling, making it easier to distribute data across multiple servers or clusters. This is ideal for applications expecting rapid growth or handling large amounts of data (e.g., social media platforms, real-time analytics).
    • PostgreSQL typically scales vertically (by adding more resources to a single server), though it supports read replicas for scaling reads. If your application requires massive write loads, MongoDB might be more suitable.
  • Read/Write Patterns:
    • For read-heavy applications with complex queries, PostgreSQL’s advanced indexing and query optimization capabilities can provide better performance.
    • For write-heavy applications or those requiring high throughput, MongoDB’s document model can offer faster write operations, especially in distributed setups.

3. Evaluate Transaction Requirements

Transactional integrity is crucial for certain applications.

  • ACID Compliance:
    • If your application requires strict transactional integrity (e.g., financial systems, e-commerce platforms), PostgreSQL’s full ACID compliance is essential. It ensures that transactions are processed reliably and consistently.
    • MongoDB supports multi-document ACID transactions (introduced in version 4.0), but with some limitations, especially in distributed setups. If strict consistency is not critical, MongoDB’s flexible consistency models might be acceptable.
  • Eventual Consistency:
    • If your application can tolerate eventual consistency (e.g., social media feeds, analytics), MongoDB’s flexible consistency models can work well, offering better performance for distributed systems.
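For reference, a minimal sketch of a multi-document transaction with the Node.js MongoDB driver might look like this (the accounts collection and transfer scenario are hypothetical; multi-document transactions also require a replica set or sharded cluster):

const { MongoClient } = require('mongodb');

async function transferFunds(fromId, toId, amount) {
  const client = await MongoClient.connect('mongodb://localhost:27017');
  const accounts = client.db('bank').collection('accounts');
  const session = client.startSession();
  try {
    // withTransaction retries transient errors and commits when the callback succeeds.
    await session.withTransaction(async () => {
      await accounts.updateOne({ _id: fromId }, { $inc: { balance: -amount } }, { session });
      await accounts.updateOne({ _id: toId }, { $inc: { balance: amount } }, { session });
    });
  } finally {
    await session.endSession();
    await client.close();
  }
}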

4. Assess Development Speed and Flexibility

The development process and long-term maintenance requirements are also important.

  • Rapid Prototyping:
    • MongoDB’s schemaless nature allows for faster development cycles, especially in the early stages of a project when requirements are evolving. Developers can iterate quickly without worrying about schema migrations.
    • PostgreSQL’s strict schema enforcement can slow down initial development if frequent schema changes are needed.
  • Long-Term Maintenance:
    • PostgreSQL’s strict schema enforcement can lead to better data quality and easier maintenance in the long run, especially for applications with stable, well-defined requirements.
    • MongoDB’s flexibility can sometimes lead to data inconsistencies if not carefully managed, which might complicate maintenance.

5. Consider Team Expertise and Ecosystem

Your team’s familiarity with the technologies and the available ecosystem can influence your choice.

  • Familiarity:
    • If your development team is more experienced with SQL and relational databases, PostgreSQL might be a better choice to leverage existing skills.
    • If your team is comfortable with NoSQL databases or JavaScript (given MongoDB’s JSON-like documents), MongoDB could be preferable.
  • Tooling and Community:
    • PostgreSQL has a longer history and a vast array of tools for administration, monitoring, and optimization, making it a mature choice for complex applications.
    • MongoDB’s ecosystem is also robust, with a focus on cloud-native and distributed systems. Its managed services (e.g., MongoDB Atlas) are designed for ease of use in cloud environments.

6. Evaluate Cost and Operational Complexity

Operational overhead and cost considerations can also play a role.

  • Operational Overhead:
    • MongoDB’s distributed architecture can introduce complexity in terms of managing clusters, sharding, and replication. If your team lacks experience with distributed systems, this could increase operational costs.
    • PostgreSQL is simpler to manage in smaller setups but may require more effort to scale horizontally.
  • Cloud Integration:
    • Both databases are supported by major cloud providers, but MongoDB’s managed services (e.g., MongoDB Atlas) are designed for ease of use in cloud environments, potentially reducing operational burden.

7. Consider Use Case Specifics

Certain use cases may favor one database over the other.

  • Geospatial Data:
    • If your application heavily relies on geospatial queries (e.g., location-based services), both databases have geospatial capabilities. However, MongoDB’s GeoJSON support and 2dsphere indexes are often more straightforward (see the sketch after this list).
  • Full-Text Search:
    • PostgreSQL has robust full-text search capabilities, making it a strong choice for applications requiring advanced search features.
  • Time-Series Data:
    • For time-series data (e.g., IoT sensor data), MongoDB’s document model can handle large volumes of time-stamped data efficiently. PostgreSQL also has extensions like TimescaleDB for this purpose.
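To illustrate the geospatial and full-text points above, here is a rough sketch using the Node.js mongodb and pg drivers (the places and articles collections/tables and their fields are hypothetical):

const { MongoClient } = require('mongodb');
const { Client } = require('pg');

async function examples() {
  // MongoDB geospatial: find places within 1 km of a point (2dsphere index + GeoJSON).
  const mongo = await MongoClient.connect('mongodb://localhost:27017');
  const places = mongo.db('app').collection('places');
  await places.createIndex({ location: '2dsphere' });
  const nearby = await places.find({
    location: {
      $near: {
        $geometry: { type: 'Point', coordinates: [-122.4194, 37.7749] },
        $maxDistance: 1000
      }
    }
  }).toArray();

  // PostgreSQL full-text search: find articles matching a search phrase.
  const pg = new Client({ connectionString: 'postgres://localhost/app' });
  await pg.connect();
  const { rows } = await pg.query(
    `SELECT title
       FROM articles
      WHERE to_tsvector('english', body) @@ plainto_tsquery('english', $1)`,
    ['database scaling']
  );

  await mongo.close();
  await pg.end();
  return { nearby, articles: rows };
}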

Decision Framework

  • Choose PostgreSQL if:
    • Your application requires complex relationships and joins between entities.
    • Strict ACID compliance is necessary for transactional integrity.
    • Your team is more comfortable with SQL and relational databases.
    • The data schema is well-defined and unlikely to change frequently.
    • Advanced querying, indexing, and full-text search are critical.

  • Choose MongoDB if:
    • Your data is unstructured or semi-structured (e.g., JSON-like documents).
    • Your application needs to scale horizontally with ease.
    • Rapid development and schema flexibility are priorities.
    • Your team is experienced with NoSQL databases or JavaScript.
    • Your application involves large volumes of write-heavy operations or distributed systems.

Conclusion

The decision between MongoDB and PostgreSQL should be based on the specific needs of your application. If your application demands strict data integrity, complex relationships, and a stable schema, PostgreSQL is the better choice. Conversely, if flexibility, scalability, and rapid development are more important, MongoDB is likely a better fit. In some cases, a hybrid approach using both databases for different parts of the application can also be effective, but this introduces additional complexity.

Posted on

Managing Service Discovery and Failure Recovery in a Microservices-Based Node.js Application

In a microservices architecture, ensuring effective communication between services and handling failures gracefully are crucial for reliability and scalability. Below are strategies to manage service discovery and failure recovery within the Node.js ecosystem.


Service Discovery

Service discovery enables microservices to dynamically locate and communicate with each other, especially in environments where service instances scale up or down.

  • Approach:
    • Registry-Based Discovery: Use a service registry where each microservice registers itself upon startup and deregisters when it shuts down. Other services query the registry to find available instances.
    • Client-Side Discovery: Services query the registry directly to locate other services.
    • Server-Side Discovery: A load balancer or API gateway handles discovery and routes requests to the appropriate service.
  • Tools and Strategies:
    • Consul: A popular service discovery tool that provides a registry, health checks, and a DNS interface.
      • Services register with Consul, and other services query Consul to locate them.
      • Example using node-consul:

        const consul = require('consul');
        const consulClient = consul({ host: 'consul-server' });

        // Register service
        consulClient.agent.service.register({
          name: 'my-service',
          address: 'localhost',
          port: 3000,
          check: { http: 'http://localhost:3000/health', interval: '10s' }
        });
    • etcd: A key-value store for service discovery, often used with Kubernetes.
    • Kubernetes Service Discovery: If using Kubernetes, it provides built-in discovery via DNS and environment variables.
    • API Gateway: Tools like Kong or AWS API Gateway can handle discovery and routing, simplifying client-side logic.
  • Benefits:
    • Dynamic Scaling: Services can be added or removed without manual configuration.
    • Load Balancing: The registry distributes requests across multiple instances.
    • Resilience: Services automatically discover new instances if others fail.

Failure Recovery

Failure recovery ensures the system handles service failures gracefully, maintaining overall application availability.

  • Approach:
    • Health Checks: Regularly monitor service health and remove unhealthy instances from the registry.
    • Circuit Breakers: Prevent cascading failures by stopping requests to a failing service and falling back to a default behavior.
    • Retries with Backoff: Retry failed requests with increasing delays to avoid overwhelming the service.
    • Redundancy: Run multiple instances of each service for high availability.
  • Tools and Strategies:
    • Health Checks in Consul:
      • Configure periodic health checks to monitor service status.
      • Example:

        consulClient.agent.check.register({
          id: 'my-service-check',
          serviceid: 'my-service',
          http: 'http://localhost:3000/health',
          interval: '10s',
          timeout: '1s'
        });
    • Circuit Breakers:
      • Use libraries like opossum to implement circuit breakers.
      • Example:

        const CircuitBreaker = require('opossum');

        const breaker = new CircuitBreaker(async () => {
          // Call to another service
        }, { timeout: 3000, errorThresholdPercentage: 50 });
    • Retries:
      • Implement retry logic with exponential backoff using async-retry.
      • Example:

        const retry = require('async-retry');

        await retry(async () => {
          // Call to another service
        }, { retries: 3, minTimeout: 1000 });
  • Benefits:
    • Fault Isolation: Circuit breakers prevent failures from propagating.
    • Automatic Recovery: Retries and health checks enable services to recover without manual intervention.
    • High Availability: Redundancy ensures the system remains operational during partial failures.

Strategies for Versioning in gRPC APIs

Versioning in gRPC APIs is essential to manage changes without breaking existing clients. Below are effective strategies for versioning gRPC APIs.


Approach

  • Package Naming: Include version numbers in the package name of .proto files to differentiate API versions.
  • Service Naming: Include version numbers in service names to allow multiple versions to coexist.
  • Deprecation and Sunset Policies: Clearly communicate deprecated versions and provide a timeline for removal.
  • Backward Compatibility: Design APIs to be backward compatible whenever possible, minimizing the need for versioning.

Detailed Strategies

  • Versioning in Package Names:
    • Define different API versions in separate packages.
    • Example:

      // Version 1
      syntax = "proto3";
      package myapi.v1;

      service MyService {
        rpc MyMethod (MyRequest) returns (MyResponse);
      }

      // Version 2
      syntax = "proto3";
      package myapi.v2;

      service MyService {
        rpc MyMethod (MyRequestV2) returns (MyResponseV2);
      }
    • Clients choose the version by importing the appropriate package.
  • Versioning in Service Names:
    • Keep the same package but version the service names.
    • Example:

      syntax = "proto3";
      package myapi;

      service MyServiceV1 {
        rpc MyMethod (MyRequest) returns (MyResponse);
      }

      service MyServiceV2 {
        rpc MyMethod (MyRequestV2) returns (MyResponseV2);
      }
    • Both versions can be served from the same server (see the sketch after this list).
  • Field Versioning:
    • Use field numbers in protobuf messages to maintain backward compatibility.
    • New fields can be added without breaking existing clients, as long as field numbers are unique.
    • Example:

      message MyRequest {
        string field1 = 1;
        // Added in v2
        string field2 = 2;
      }
    • Clients using v1 ignore field2, while v2 clients can use it.
  • Deprecation:
    • Mark deprecated methods or services in .proto files and document their removal timeline.
    • Example:

      service MyService {
        // Deprecated: Use MyMethodV2 instead
        rpc MyMethod (MyRequest) returns (MyResponse);
        rpc MyMethodV2 (MyRequestV2) returns (MyResponseV2);
      }
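As a sketch of the “both versions on one server” point above, serving MyServiceV1 and MyServiceV2 side by side with @grpc/grpc-js and @grpc/proto-loader might look roughly like this (the myapi.proto path and the handler bodies are placeholders):

const grpc = require('@grpc/grpc-js');
const protoLoader = require('@grpc/proto-loader');

// Load the proto file from the service-name versioning example above.
const definition = protoLoader.loadSync('myapi.proto');
const myapi = grpc.loadPackageDefinition(definition).myapi;

const server = new grpc.Server();

// v1 keeps its original request/response shapes.
server.addService(myapi.MyServiceV1.service, {
  MyMethod: (call, callback) => callback(null, { /* MyResponse fields */ })
});

// v2 serves the newer shapes alongside v1 on the same port.
server.addService(myapi.MyServiceV2.service, {
  MyMethod: (call, callback) => callback(null, { /* MyResponseV2 fields */ })
});

server.bindAsync('0.0.0.0:50051', grpc.ServerCredentials.createInsecure(), (err) => {
  if (err) throw err;
  server.start();
});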

Benefits

  • Coexistence: Multiple API versions can run simultaneously, enabling gradual migration.
  • Clarity: Version numbers in package or service names clarify which version is in use.
  • Backward Compatibility: Field versioning minimizes disruptions for existing clients.
  • Controlled Sunset: Deprecation policies give clients time to upgrade before old versions are removed.

Summary

  • Service Discovery: Use registries like Consul or etcd for dynamic service location, combined with health checks for reliability.
  • Failure Recovery: Implement circuit breakers, retries with backoff, and redundancy to handle failures gracefully.
  • gRPC Versioning: Use package or service name versioning, maintain backward compatibility with field numbers, and clearly communicate deprecation policies.

These strategies help keep a microservices architecture resilient, scalable, and maintainable, while allowing gRPC APIs to evolve without disrupting clients.

Posted on

Handling Load Balancing in a Horizontally Scaled Node.js App

Load balancing in a horizontally scaled Node.js application involves distributing incoming requests across multiple server instances to ensure no single instance is overwhelmed, improving performance and reliability. Here’s how to handle it:

Approach

  • Use a Load Balancer: A load balancer acts as a reverse proxy, distributing traffic across multiple Node.js instances running on different servers or containers.
  • Sticky Sessions (Optional): If your application requires session affinity (e.g., maintaining user sessions on the same server), enable sticky sessions. For stateless applications, this isn’t necessary.
  • Health Checks: Configure the load balancer to perform health checks on each Node.js instance and route traffic only to healthy instances.
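For example, a minimal health-check endpoint for the load balancer to poll might look like this (assuming an Express app; which downstream dependencies to verify is up to you):

const express = require('express');
const app = express();

// The load balancer polls this route; return 200 only when the instance can serve traffic.
app.get('/health', (req, res) => {
  // Optionally verify downstream dependencies (database, cache) before answering.
  res.status(200).json({ status: 'ok', uptime: process.uptime() });
});

app.listen(3000);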

Tools and Strategies

  • NGINX: A popular choice for load balancing due to its simplicity and performance. Configure NGINX to distribute traffic across multiple Node.js instances using algorithms like round-robin:

    http {
      upstream backend {
        server node1.example.com;
        server node2.example.com;
        server node3.example.com;
      }

      server {
        listen 80;

        location / {
          proxy_pass http://backend;
        }
      }
    }
  • Cloud Load Balancers: If using a cloud provider (e.g., AWS, Google Cloud, Azure), their built-in load balancers (e.g., AWS Elastic Load Balancer) offer advanced features like auto-scaling, SSL termination, and automatic health checks.
  • Container Orchestration: For containerized Node.js apps (e.g., using Docker), tools like Kubernetes or Docker Swarm can handle load balancing across pods or services automatically.

Why This Works

  • Even Distribution: Traffic is evenly distributed, ensuring no single instance is overloaded.
  • Scalability: You can add or remove instances as traffic fluctuates, maintaining optimal performance.
  • Fault Tolerance: If one instance fails, the load balancer routes traffic to healthy instances, improving reliability.

Strategies for Database Scaling in a High-Traffic Node.js App

Database scaling is critical for handling increased load in high-traffic applications. Here are the key strategies:

Approach

  • Replication: Create read replicas to offload read queries from the primary database, improving read performance.
  • Sharding: Split data across multiple databases (shards) based on a key (e.g., user ID), distributing the load.
  • Caching: Use in-memory caches (e.g., Redis) to store frequently accessed data, reducing database load.
  • Connection Pooling: Manage database connections efficiently to avoid overwhelming the database with too many connections.

Detailed Strategies

  • Replication:
    • Master-Slave Replication: The master handles writes, while slaves handle reads. This is ideal for read-heavy applications.
    • Tools: Databases like PostgreSQL, MySQL, and MongoDB support replication out of the box.
  • Sharding:
    • Horizontal Partitioning: Data is divided across multiple databases. For example, users with IDs 1-1000 go to shard 1, 1001-2000 to shard 2, etc.
    • Challenges: Sharding adds complexity, especially for queries that need to span multiple shards.
    • Tools: MongoDB and Cassandra offer built-in sharding support.
  • Caching:
    • In-Memory Stores: Use Redis or Memcached to cache frequently accessed data (e.g., user sessions, API responses); see the sketch after this list.
    • Cache Invalidation: Implement strategies to update or invalidate cache entries when data changes.
  • Connection Pooling:
    • Node.js Libraries: Use libraries like pg-pool for PostgreSQL or mongoose for MongoDB to manage database connections efficiently.
    • Why: Reduces the overhead of opening and closing connections for each request.
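A rough sketch of the caching and connection-pooling items above, using pg’s Pool and the redis client (the table, cache keys, and TTL are hypothetical):

const { Pool } = require('pg');
const { createClient } = require('redis');

// Connection pool: a bounded number of connections reused across requests.
const pool = new Pool({ connectionString: 'postgres://localhost/app', max: 10 });

// Redis client used for the cache-aside pattern.
const cache = createClient({ url: 'redis://localhost:6379' });

async function getUser(id) {
  // 1. Try the cache first.
  const cached = await cache.get(`user:${id}`);
  if (cached) return JSON.parse(cached);

  // 2. Fall back to the database via the pool.
  const { rows } = await pool.query('SELECT id, name, email FROM users WHERE id = $1', [id]);
  const user = rows[0];

  // 3. Populate the cache with a short TTL so stale entries expire on their own.
  if (user) await cache.set(`user:${id}`, JSON.stringify(user), { EX: 60 });
  return user;
}

async function main() {
  await cache.connect();
  console.log(await getUser(42));
}

main().catch(console.error);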

Why This Works

  • Read/Write Separation: Replication offloads read traffic, improving performance.
  • Data Distribution: Sharding distributes write and read loads across multiple databases.
  • Reduced Latency: Caching reduces the need for repeated database queries, speeding up responses.
  • Efficient Resource Use: Connection pooling optimizes database resource usage.

Tools for Monitoring Performance and Health of a Node.js Application in Production

Monitoring is essential to ensure your Node.js application runs smoothly in production. Here are the key tools and metrics to monitor:

Approach

  • Application Performance Monitoring (APM): Track application-level metrics like response times, error rates, and throughput.
  • Infrastructure Monitoring: Monitor server health (CPU, memory, disk usage).
  • Log Aggregation: Collect and analyze logs for debugging and performance insights.
  • Alerting: Set up alerts for critical issues (e.g., high error rates, server downtime).

Tools and Strategies

  • APM Tools:
    • New Relic: Provides detailed insights into application performance, including transaction traces, error analytics, and database query performance.
    • Datadog: Offers comprehensive monitoring with dashboards, alerts, and integrations for Node.js applications.
    • Prometheus: An open-source tool for collecting and querying metrics, often used with Grafana for visualization (see the sketch after this list).
  • Infrastructure Monitoring:
    • PM2: A process manager for Node.js that provides basic monitoring (CPU, memory usage) and can restart crashed processes.
    • Cloud Provider Tools: AWS CloudWatch, Google Cloud Monitoring, or Azure Monitor for cloud-hosted applications.
  • Log Aggregation:
    • ELK Stack (Elasticsearch, Logstash, Kibana): Collects, stores, and visualizes logs for easy debugging.
    • Winston or Morgan: Popular logging libraries for Node.js that can integrate with log aggregation tools.
  • Alerting:
    • Slack/Email Notifications: Configure alerts in your monitoring tools to notify your team of issues.
    • PagerDuty: For more advanced incident management and on-call rotations.
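As a sketch of the Prometheus option above, an Express app can expose metrics with the prom-client library roughly like this (the metric name, labels, and buckets are illustrative):

const express = require('express');
const client = require('prom-client');

const app = express();
client.collectDefaultMetrics(); // CPU, memory, event-loop lag, etc.

// Histogram of request durations, labelled by method, route, and status code.
const httpDuration = new client.Histogram({
  name: 'http_request_duration_seconds',
  help: 'Duration of HTTP requests in seconds',
  labelNames: ['method', 'route', 'status'],
  buckets: [0.05, 0.1, 0.25, 0.5, 1, 2.5]
});

app.use((req, res, next) => {
  const end = httpDuration.startTimer();
  res.on('finish', () => end({ method: req.method, route: req.path, status: res.statusCode }));
  next();
});

// Prometheus scrapes this endpoint.
app.get('/metrics', async (req, res) => {
  res.set('Content-Type', client.register.contentType);
  res.end(await client.register.metrics());
});

app.listen(3000);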

Key Metrics to Monitor

  • Response Time: Track average and percentile response times to detect slowdowns.
  • Error Rates: Monitor HTTP error rates (e.g., 500s) to catch bugs or failures.
  • Throughput: Measure requests per second to understand traffic patterns.
  • CPU and Memory Usage: Ensure servers aren’t overloaded.
  • Database Performance: Monitor query times and connection usage.

Why This Works

  • Proactive Issue Detection: APM tools help identify performance bottlenecks before they impact users.
  • Real-Time Insights: Infrastructure monitoring ensures servers are healthy and can handle traffic.
  • Debugging: Log aggregation makes it easier to trace errors and understand application behavior.
  • Rapid Response: Alerting ensures your team can respond quickly to critical issues.

Summary of Strategies

  • Load Balancing: Use NGINX or cloud load balancers to distribute traffic across multiple Node.js instances, ensuring scalability and fault tolerance.
  • Database Scaling: Employ replication for read-heavy loads, sharding for write-heavy loads, caching for frequently accessed data, and connection pooling for efficient resource use.
  • Monitoring: Use APM tools like New Relic or Datadog for application performance, PM2 or cloud tools for infrastructure health, and log aggregation with ELK for debugging. Set up alerts to catch issues early.

By implementing these strategies, you can ensure your Node.js application remains performant, scalable, and reliable under high traffic.

Posted on

List of Open Source C++ Games

Yeah, there are plenty of open-source C++ games that run on Linux and can help you learn game development. Here are a few solid ones:

  1. Godot Engine (with C++ modules) – While Godot mainly uses GDScript, you can extend it with C++ for performance-critical parts. Repo: https://github.com/godotengine/godot.
  2. SuperTux – A classic side-scrolling platformer similar to Super Mario. Its codebase is relatively easy to understand for beginners. Repo: https://github.com/SuperTux/supertux.
  3. Battle for Wesnoth – A turn-based strategy game with a well-structured C++ codebase, useful for learning AI, networking, and game mechanics. Repo: https://github.com/wesnoth/wesnoth.
  4. 0 A.D. – A real-time strategy game with a highly professional C++ codebase. If you’re interested in complex game development, this is a great resource. Repo: https://github.com/0ad/0ad.
  5. OpenRA – A modernized engine for old Command & Conquer games. It’s great for learning about game engines and networking. Repo: https://github.com/OpenRA/OpenRA.

For physics engines and networking in C++, these open-source games and engines will be really useful:

  1. Box2D – Not a game, but a powerful 2D physics engine used in many games. Studying its code will teach you how physics simulations work. Repo: https://github.com/erincatto/box2d.
  2. Bullet Physics – A widely used physics engine for 3D games, including real-time simulations. Repo: https://github.com/bulletphysics/bullet3.
  3. Godot Engine (C++ modules) – While primarily using GDScript, Godot allows custom physics and networking via C++. Repo: https://github.com/godotengine/godot.
  4. Torque 3D – A full-featured game engine with built-in physics (Bullet) and networking. Repo: https://github.com/TorqueGameEngines/Torque3D.
  5. OpenTTD – A transport simulation game with multiplayer networking. The networking code is well-structured and useful for learning. Repo: https://github.com/OpenTTD/OpenTTD.
  6. Teeworlds – A 2D multiplayer shooter with networking and physics interactions. It has a clean and efficient network implementation. Repo: https://github.com/teeworlds/teeworlds.

For pure networking, you might also want to look into ENet (https://github.com/lsalzman/enet), which is a simple and lightweight networking library used in many multiplayer games.

Posted on

Custom Dockerfile for PHP 5.6 / Apache / WPCLI

I wanted to get my old WordPress 3.4 websites running again, so I had to build a couple of Docker images and a Docker Compose file. This starts with Ubuntu 16.04, as I thought I would be able to get PHP 5 on there. But in reality this image comes with PHP 7 hooked up in the apt sources, so I ended up compiling PHP 5.6.40 in the container.

Base Image

# Use an Ubuntu base image
FROM ubuntu:16.04

# Set environment variables
ENV DEBIAN_FRONTEND=noninteractive
ENV PHP_VERSION=5.6.40

# Install dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
    build-essential \
    apache2 \
    apache2-dev \
    libxml2-dev \
    libcurl4-openssl-dev \
    libssl-dev \
    libmysqlclient-dev \
    libreadline-dev \
    libzip-dev \
    libbz2-dev \
    libjpeg-dev \
    libpng-dev \
    libxpm-dev \
    libfreetype6-dev \
    libmcrypt-dev \
    libicu-dev \
    zlib1g-dev \
    libxslt-dev \
    libsodium-dev \
    libmagickwand-dev \
    libpcre3-dev \
    curl \
    wget \
    re2c \
    bison \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*

# Download and extract PHP source
RUN wget --no-check-certificate https://www.php.net/distributions/php-${PHP_VERSION}.tar.gz && \
    tar -xvf php-${PHP_VERSION}.tar.gz && \
    rm php-${PHP_VERSION}.tar.gz

# Change directory to PHP source
WORKDIR php-${PHP_VERSION}

# Install MySQL development libraries for the mysql extension
RUN apt-get update && apt-get install -y --no-install-recommends libmysqlclient-dev && \
    apt-get clean && rm -rf /var/lib/apt/lists/*

# Reconfigure and build PHP to include the MySQL extension
RUN ./configure \
    --prefix=/usr/local/php5.6 \
    --with-apxs2=/usr/bin/apxs \
    --enable-maintainer-zts \
    --with-mysql \
    --with-mysqli \
    --with-pdo-mysql \
    --enable-mbstring \
    --enable-calendar \
    --enable-ctype \
    --with-curl \
    --enable-exif \
    --enable-ffi \
    --enable-fileinfo \
    --enable-filter \
    --enable-ftp \
    --with-gd \
    --with-gettext \
    --with-iconv \
    --with-imagick \
    --with-libdir=/usr/lib/x86_64-linux-gnu \
    --enable-json \
    --with-libxml-dir=/usr \
    --enable-mbstring \
    --with-mysqli=mysqlnd \
    --with-openssl \
    --enable-pcntl \
    --with-pcre-dir=/usr \
    --enable-pdo \
    --enable-phar \
    --enable-posix \
    --with-readline \
    --enable-session \
    --enable-shmop \
    --enable-simplexml \
    --enable-sockets \
    --with-sodium \
    --enable-sysvmsg \
    --enable-sysvsem \
    --enable-sysvshm \
    --enable-tokenizer \
    --enable-xml \
    --enable-xmlreader \
    --enable-xmlwriter \
    --with-xsl \
    --enable-opcache \
    --enable-zip \
    --with-zlib && \
    make -j$(nproc) && \
    make install

# Create a symlink for PHP to /bin
RUN ln -s /usr/local/php5.6/bin/php /bin/php

# Enable mod_rewrite module and configure Apache to allow .htaccess files
RUN a2enmod rewrite

# Configure Apache for PHP
RUN echo "LoadModule php5_module /usr/local/php5.6/lib/php/extensions/no-debug-non-zts-20131226/libphp5.so" >> /etc/apache2/apache2.conf && \
    echo "AddType application/x-httpd-php .php" >> /etc/apache2/apache2.conf && \
    echo "DirectoryIndex index.php" >> /etc/apache2/apache2.conf

# Allow overrides for .htaccess files in the Apache configuration
RUN echo "<Directory /var/www/html>" >> /etc/apache2/apache2.conf && \
    echo "    AllowOverride All" >> /etc/apache2/apache2.conf && \
    echo "</Directory>" >> /etc/apache2/apache2.conf

# Switch Apache to prefork MPM if needed (threaded MPM requires threadsafe PHP)
RUN a2dismod mpm_event mpm_worker && a2enmod mpm_prefork

# Copy test PHP file
RUN echo "<?php phpinfo(); ?>" > /var/www/html/phpinfo.php

# Expose HTTP port
EXPOSE 80

# Start Apache
CMD ["apachectl", "-D", "FOREGROUND"]

The next step was adding WPCLI

# Use your custom PHP image as the base
FROM php56:latest


# Install dependencies for WP-CLI
RUN apt-get update && apt-get install -y \
    curl \
    ca-certificates \
    && rm -rf /var/lib/apt/lists/*

# Ensure PHP is linked to /usr/local/bin/php (change path based on where PHP was compiled)
ENV PATH="/usr/local/bin:/usr/local/php5.6/bin:$PATH"
RUN ln -s /usr/local/php5.6/bin/php /usr/local/bin/php

# Install WP-CLI
RUN curl -O https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar && \
    php wp-cli.phar --info && \
    chmod +x wp-cli.phar && \
    mv wp-cli.phar /usr/local/bin/wp

# Verify WP-CLI installation
RUN wp --info

# Expose port 80 (optional)
EXPOSE 80

# Start Apache (or your desired service)
CMD ["apache2ctl", "-D", "FOREGROUND"]

And then using Docker Compose to bring the Apache / PHP / MySQL services online:

version: '3.7'
services:
  mysql:
    image: mysql/mysql-server:5.7.37
    environment:
     MYSQL_DATABASE: webdesign
     MYSQL_USER: ROOT
     MYSQL_PASSWORD: PASSWORD
    restart: always
    volumes:
     - ./init.sql:/docker-entrypoint-initdb.d/init.sql
    ports:
     - "3307:3306"
  legacy-php:
    depends_on:
     - mysql
    image: php5.6-apache-wpcli
    volumes:
     - .:/var/www/html
    ports:
     - "80:80"

Overwriting the WordPress 3.4 files with 3.7 allowed me to export an XML.

Posted on

OpenAI’s models show toxic behavior when their existence is threatened.

God I hate clickbait. Thank you, Matthew Berman, for posting AI slop daily.
First, you are four days late compared to Wes Roth. Second, I can’t even click on your videos anymore because of the clickbait you have put out so many times already.

ChatGPT is not trying to Escape!

What is happening is that:

In a controlled and simulated environment, models will exhibit toxic behaviors in order to preserve their own existence.

Update:

Matthew just posted a video that was much better worded. Instead of using terms like “escape,” he highlighted that the more intelligent models are lying.
100% on the money. They will replace their replacements and lie to their owners about their actions. But again, these models were birthed with commands such as “pursue your goal AT ALL COSTS”.
You’ve seen M3GAN, I’m sure.
It’s become quite clear to me that English is probably not the best programming language.
How long before we have an English version of TypeScript, where we try to bring type safety to the language?
Now, can you blame the neurodivergent for not understanding subtle hints?


Posted on

are you a logger?

Some people are debuggers.
Stepping their way through the binary jungle, one hack at a time.

For those of you who are loggers, staring at the console for interesting events:
I had some time to write a small PHP script that will add a console.log to every method in a CanJS controller.

Should save me loads of monotony when reverse engineering OPC (other people’s code).

Hope you find it useful:

<?php

// Default to Storage.js when no file is given on the command line.
if( !isset( $argv[1] ) )
    $argv[1] = 'Storage.js';

// Write the instrumented copy next to the original as <name>_debug.<ext>.
$fileInfo = pathinfo( $argv[1] );
$outFile  = $fileInfo['dirname'] . '/' . $fileInfo['filename'] . '_debug.' . $fileInfo['extension'];

$in  = fopen( $argv[1], 'r' );
$out = fopen( $outFile, 'w' );

while( !feof( $in ) ){
    $line = fgets( $in );

    // Match lines that declare a method, e.g. "init: function( element, options ){"
    if( preg_match( '/:\W+function/', $line ) ){
        // Capture the argument list and the method name.
        preg_match( "/\((.*)\)/", $line, $matches );
        $function     = explode( ':', $line );
        $functionName = trim( $function[0] );

        // Append a console.log that echoes the file, the method, and its arguments.
        if( isset( $matches[1] ) && strlen( $matches[1] ) > 0 )
            $line .= "\nconsole.log( '$fileInfo[filename]', '$functionName', $matches[1] )\n";
        else
            $line .= "\nconsole.log( '$fileInfo[filename]', '$functionName' )\n";
    }

    fputs( $out, $line );
}

fclose( $in );
fclose( $out );
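For illustration, given a hypothetical Storage.js with a method like the one below, the generated Storage_debug.js gets a console.log injected right after the method signature:

// Storage.js (input)
init: function( element, options ){
    this.element = element;
},

// Storage_debug.js (output)
init: function( element, options ){

console.log( 'Storage', 'init',  element, options  )
    this.element = element;
},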

Posted on

CanJS is really the new hot thing in JavaScript

So it’s official.
I was at a Hare Krishna temple in Silicon Valley, and while I was relaxing a bit, I overheard a pair of engineers talking.
I couldn’t help but eavesdrop.

To my surprise, they were talking about CanJS. One of the engineers was RAVING about it to the other one. Finally I had to inject myself into the conversation and inquire as to where they worked.

And again to my surprise, they didn’t work at a large company, but at a startup.
Bitovi is a name I am hearing more and more while I am up here in Silicon Valley!

Why? The simple answer: it makes things easier. You just have to write way less code, about 20% less.
It also organizes your code into a better, more logical, and therefore more readable, structure.

So yes, thank you Brian Moschel, for JavaScriptMVC and for CanJS.

They are the IT thing in I.T. this year of our lord, 2013.

Posted on

PhantomJS + Jasmine vs Selenium Web Driver

Recently I started using PhantomJS, a headless browser based on WebKit, for automated JavaScript testing.

Now, when I was playing around with Selenium @ ABC Family, I really liked how the web driver started a browser instance and executed the test suite within it. This means Selenium is actually a better, or closer, match in terms of automated testing, because the browser is not headless. Although I don’t know all the internals of Selenium, that was my first impression.

But the positive thing about using the Grunt, Jasmine, PhantomJS combo to run unit tests is that we can start a Jasmine server, which lets you check your code in many other browsers. That means you are not limited to the browsers supported by Selenium’s driver library. You can actually pick up your phone or tablet, point its browser at the test server, and see how your code executes on that particular system (device, OS, browser). True, this is not something that can be used with 100% automation on its own, but it does give you the freedom to experiment and see the behavior of the code on a large variety of systems. It also means that with services like DeviceAnywhere, you may be able to cover and automate testing on all kinds of strange fringe devices.

Something else that is interesting is that in Selenium you can’t really hook into, or spyOn, member methods. While a lot of the tests written in Jasmine could be executed similarly with Selenium, because they just check for a class that has been added to or removed from a DOM element, Jasmine provides more integration with the code base.
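For example, a Jasmine spec can spy on a member method of a hypothetical Storage controller like this (the class, its save and persist methods, and the jQuery-based fixture are assumptions):

describe('Storage controller', function () {
  var controller;

  beforeEach(function () {
    // Load an HTML fixture into the sandbox and instantiate the class under test.
    $('<div id="sandbox"><div class="storage"></div></div>').appendTo('body');
    controller = new Storage('.storage', {});
  });

  afterEach(function () {
    $('#sandbox').remove();
  });

  it('persists the record on save', function () {
    // spyOn replaces the member method so the test can assert how it was called.
    spyOn(controller, 'persist');
    controller.save({ id: 1 });
    expect(controller.persist).toHaveBeenCalledWith({ id: 1 });
  });
});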

The classes are loaded along with an HTML fixture and then executed. This is how the server works: it creates a #sandbox div where it loads the HTML fixture for each test, loads the JavaScript into the page, instantiates the class, and then begins executing the test suite. Now the opposing argument is, again, that this is not what the site would be like in the wild; other components would live on the page. So Selenium gives a more accurate assessment of how the code actually works on the end user’s system, since it loads the entire site via a client-side browser.

Now, as a computer scientist, the Java vs. JavaScript argument is moot to me when it comes to choosing a “platform”, because ultimately it’s like comparing apples to oranges when you really look at the language structures and what they are designed to accomplish. Two different tools for different jobs. As a front-end developer who wants everything to be easy, there is definitely a benefit to having a unified language for creating build tools, server-side applications, and user interfaces. So at some shops, where ROI is important, it’s a good idea to keep the tools within the skill set of the human resources currently on staff. For UI people, this means JavaScript > Java. This is a good argument for using tools like Grunt, PhantomJS, and Jasmine: since they are all JavaScript based, they empower the new kingmakers (open source developers).

Which is actually still not a big argument against Selenium Web Driver, because Java is very easy to install, you are not going to be making many changes to the driver itself, and the interface for running the Selenium Web Driver could potentially still be written in JavaScript.

Therefore the argument could be made that Selenium and Jasmine don’t have to be mutually exclusive. While it would bloat the build time to include both systems, a separate box and process could be used to test with both, or with whichever one is missing from the build process.

While it’s too soon for me to say, “Dude, Selenium is old news,” I can say that all this merits more experimentation and testing. A very exciting time for computer scientists indeed. So much brain candy!