2023-03-31

Is there a way to exclude some words from being translated when using the Google Cloud Translation API in JavaScript?

As an example, let's say we are translating the following sentence to Spanish:

On my "playbook" the dogs were funny.

Everything should get translated with the exception of "playbook".

Extra notes:

  • Only JavaScript can be used here.
  • In the example I tried to keep the words "playbook" and "ricky" from being translated.
  • I don't have the option of adding an exception glossary.

Hope you can help; I have been trying to find a proper way to exclude specific words.

// Check if the text contains the words "playbooks" or "ricky"
if (text.indexOf("playbooks") === -1 && text.indexOf("ricky") === -1) {
  var xhr = new XMLHttpRequest();
  xhr.open("POST", "https://translation.googleapis.com/language/translate/v2");
  xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
  xhr.onload = function() {
    if (xhr.status === 200) {
      var response = JSON.parse(xhr.responseText);
      var translatedText = response.data.translations[0].translatedText;

      // Decode HTML-encoded quotation marks returned by the API
      translatedText = translatedText.replace(/&quot;/g, '"');

      // Replace the ampersand character with the Spanish "y"
      translatedText = translatedText.replace(/&/g, "y");

      // Normalize the casing of "playbooks" (with word boundaries)
      translatedText = translatedText.replace(/\bplaybooks\b/gi, 'playbooks');
      // ... (rest of the handler truncated in the original post)
    }
  };
}
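
A hedged sketch of one common workaround (not from the question): swap the protected words for opaque numbered placeholders before calling the API, then restore them in the translated text. The word list and token format are illustrative, and it is worth verifying that the API passes the tokens through untouched:

var NO_TRANSLATE = ["playbook", "ricky"];

function protectWords(text) {
  var found = [];
  var masked = text.replace(
    new RegExp("\\b(" + NO_TRANSLATE.join("|") + ")\\b", "gi"),
    function (match) {
      found.push(match);                            // remember the original casing
      return "__KEEP" + (found.length - 1) + "__";  // opaque token
    });
  return { masked: masked, found: found };
}

function restoreWords(translated, found) {
  return translated.replace(/__KEEP(\d+)__/g, function (_, i) {
    return found[Number(i)];
  });
}

// Usage: translate protectWords(original).masked,
// then run restoreWords(translatedText, found) on the API result.

If the request can be sent with format=html, wrapping the protected words in markup such as <span translate="no"> is another commonly suggested option, though whether the endpoint honors it should be tested.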


How to run Valgrind with a Python process?

We have Docker containers which run Python services. We want to check some of the Python processes for memory leaks. We tried attaching Valgrind to our Python process after setting the environment variable PYTHONMALLOC=malloc in our Dockerfile, but we are not able to get a proper stack trace.

Command used:

/usr/bin/valgrind --tool=memcheck --leak-check=full --show-leak-kinds=all --trace-children=yes --track-origins=yes

Any idea what needs to be done to use Valgrind with a Python process inside Docker containers?
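
A hedged sketch of an invocation that tends to produce usable traces, assuming CPython (note the command as posted names no program to run): PYTHONMALLOC=malloc must be set in the environment of the process Valgrind actually starts, not just at image build time; CPython's suppressions file (Misc/valgrind-python.supp in its source tree) filters interpreter noise; and readable stacks usually also require the interpreter's debug symbols (e.g. the python3-dbg package on Debian/Ubuntu). Paths and the script name are illustrative:

export PYTHONMALLOC=malloc
valgrind --tool=memcheck --leak-check=full --show-leak-kinds=all \
         --trace-children=yes --track-origins=yes \
         --suppressions=/path/to/cpython/Misc/valgrind-python.supp \
         python3 my_service.py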



React app deployment on GitHub Pages displays a blank page

I've developed a React app that runs fine locally. I'm now trying to deploy it using GitHub Pages.

Despite following this tutorial, I get a blank page, and several 404 errors in the logs.

Here is my index.js:

import React from 'react';
import ReactDOM from 'react-dom/client';
import {HashRouter} from "react-router-dom";
import './index.css';
import App from './App';

ReactDOM.createRoot(document.getElementById('root')).render(
    <React.StrictMode>
        <HashRouter>
            <App />
        </HashRouter>
    </React.StrictMode>
);

And the Route defined in App.js

return (
    <>
        <Navbar/>
        <Routes>
            <Route path='/' element={<CoinsTable coins={coins}/>}/>
            <Route path='/coin' element={<CoinDetails/>}>
                <Route path=':coinId' element={<CoinDetails/>}/>
            </Route>
            <Route path='/option-prices' element={<CoinOptionsTable/>}>
                <Route path=':coinId' element={<CoinOptionsTable spotValue={1500} />}/>
            </Route>
        </Routes>

    </>
);

Here is the homepage property I've added to the project's package.json:

"private": false,
"homepage": "https://myusername.github.io/myusername/myappname",

I've added this to scripts:

"predeploy": "npm run build",
"deploy": "gh-pages -d build",

Then I run this command to deploy the app:

npm run deploy
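
For comparison, a hedged sketch of the package.json shape the gh-pages tutorials usually show for a project site: the homepage normally has only one path segment (the repository name) after the github.io host, so the doubled username above is worth double-checking. The repository name here is assumed:

{
  "homepage": "https://myusername.github.io/myappname",
  "private": false,
  "scripts": {
    "predeploy": "npm run build",
    "deploy": "gh-pages -d build"
  }
}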


memory leak of virtual memory

I run TensorFlow on Linux (Ubuntu 20.04). TF executes my C++ functions for graph compilation/destruction.

The process's virtual memory consumption grows until out of memory (>40 GB) and the process is killed.

I track malloc/free and mmap/munmap with an LD_PRELOAD hook and compare them with the process virtual memory consumption from /proc/self/status (VmSize). Each graph compilation increases both the malloc-allocated size and the process virtual memory by almost the same amount.

Graph destruction decreases the malloc-allocated size but not the process virtual memory.

So despite the malloc-allocated memory staying stable overall, the process virtual memory grows fast.

e.g.:

before compile: 41MB[mmap]/3320MB[malloc]/12428MB[process]
after  compile: 46MB[mmap]/7434MB[malloc]/16529MB[process]
before destroy: 46MB[mmap]/7436MB[malloc]/16593MB[process]
after  destroy: 46MB[mmap]/3250MB[malloc]/16593MB[process]

graphDestroy does not destroy everything by design, so a small leftover is expected.

I tried to play with mallopt(M_MMAP_THRESHOLD) with no result.

What else can be done in order to find the leak?

UPDATE:

I am adding the steps I tried and the approach that worked; maybe it will be useful to someone.

The functions themselves are tested with sanitizers in unit tests. Valgrind crashes the app before the main training loop starts, so this direction was a dead end.

I wanted to collect memory stats from glibc. Unfortunately mallinfo is useless, mallinfo2 is not available on Ubuntu 20.04, and malloc_info prints too much. So I tried jemalloc and its malloc_stats_print function for stats.

The stats looked OK, but the app's behavior changed: virtual memory still grew (up to 75 GB) but resident memory stayed stable (~20 GB) and the app ran with no memory issues.

Then I ran the app without jemalloc but with a periodic call to malloc_trim(0), and it behaved the same way as with jemalloc (virtual memory grew but resident memory stayed stable).

Conclusion: sometimes malloc_trim can fix an issue that looks like a leak.
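
For reference, a minimal sketch of the periodic malloc_trim(0) idea described above, assuming glibc (malloc_trim is a glibc extension from <malloc.h>); the interval is arbitrary:

#include <malloc.h>

#include <atomic>
#include <chrono>
#include <thread>

std::atomic<bool> keep_trimming{true};

// Background loop that periodically asks glibc to return free heap pages
// to the OS; without this, freed memory can stay mapped in the process.
void trim_loop() {
    while (keep_trimming.load()) {
        std::this_thread::sleep_for(std::chrono::seconds(30));
        malloc_trim(0);
    }
}

// Launched once at startup, e.g.: std::thread(trim_loop).detach();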




remove stripes / vertical streaks in remote sensing images

I have a remote sensing photo with bright, non-continuous vertical streaks or stripes, as in the picture below. Is there a way to remove them using Python and OpenCV, or any other image-processing library?
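
A hedged sketch of one standard approach (an assumption, not from the question): estimate a per-column brightness profile, smooth it to keep only the scene's low-frequency trend, and subtract the residual stripe pattern. File names are illustrative:

import cv2
import numpy as np

img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

# Per-column brightness profile; narrow stripes show up as spikes in it.
col_profile = np.median(img, axis=0)

# Smooth the profile so only the low-frequency scene trend remains.
smooth = cv2.GaussianBlur(col_profile.reshape(1, -1), (51, 1), 0).ravel()

# The residual (profile minus trend) approximates the stripe pattern.
stripes = col_profile - smooth
corrected = np.clip(img - stripes[np.newaxis, :], 0, 255).astype(np.uint8)

cv2.imwrite("destriped.png", corrected)

For stripes that are only piecewise present, a frequency-domain notch filter on the vertical bands of the FFT is another common option.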



2023-03-30

How to fetch user data?

I am trying to fetch my user data from Firestore. I set up a model class that's an ObservableObject to fetch the document data from Firestore so it appears in the user interface, but I have been receiving the error below. How do I fix this code to get user data from Firebase?

Cannot convert value of type 'String' to expected argument type 'URL'

class NewUserData: ObservableObject {
    
    @Published var datas = [UserModelFile]()
    
   
    
    func fetchUser() {
        let db = Firestore.firestore()
      let ref = db.collection("user")
            
        ref.getDocuments { snapshot, error in
            guard error == nil else {
                print(error!.localizedDescription)
                return
            }
        
            if let snapshot = snapshot {
                for document in snapshot.documents {
                    let data = document.data()
                    
                    let username = data["username"] as? String ?? ""
                    let bio = data["bio"] as? String ?? ""
                    // note: this reads the "bio" field again; it probably
                    // should read the profile-image field from the document
                    let profileImages = data["bio"] as? String ?? ""

                    // renamed from "data" to avoid redeclaring the constant above
                    let user = UserModelFile(username: username, bio: bio,
                                             ProfileImages: profileImages)
                    self.datas.append(user)
                }
            }
        }
    }
}


Change User Email in MongoDB via Web App created with Node, Express, EJS, and Mongoose

I have a simple blogging web application. I have set up the ability for users to log in and I am now working on account management. The first part of the account management will be for users to change their email.

I have an account.ejs page that includes a nav.ejs, displays the email of the currently logged in user, and then a form for which the user can update their email.

The form is simple, it asks for the new email and then includes a second text box to confirm their changed email, and these text boxes must match to proceed.

Here is where I am having trouble. I have a signup.ejs page with a form.addEventListener that handles input into a form; if all is well, I use res = await fetch() to send the data to authRoutes.js, which in turn is handled by authController.js.

I am trying to adjust my account.ejs page so that it contains a form allowing a user to update their email in MongoDB. I am never able to get from the form inside account.ejs to my accountPost method inside authController unless I change the variable name from const form = document.querySelector('form'); to const form2 = document.querySelector('form');.

My authController.js will then update the user's email in MongoDB if I rename the form variable to form2 in the document.querySelector('form') line. I am having trouble understanding why this is and what I am doing wrong.

account.ejs:

<html lang="en">
<%- include("./partials/head.ejs") %>

<body>
  <%- include("./partials/nav.ejs") %>

  <div class="account content">
    <div>
      <h2 class="management-header">Account Management for <%= user.email %></h2>
    </div>
    <div class="alter-email content">
        <p>Change Email</p>
        <hr>
        <form class="alter-email-form" action="/account/<%= user._id %>" method="POST">
          <label for="oldEmail">Old Email</label>
          <input type="text" id="oldEmail" name="oldEmail" required>
          <label for="newEmail">New Email</label>
          <input type="text" id="newEmail" name="newEmail" required>
          <button>Update Email</button>
        </form>
      </div>

  </div>

  <%- include("./partials/footer.ejs") %>

  <script>
    //THIS IS THE PROBLEM
    const form2 = document.querySelector('form');
    form.addEventListener('submit', async (e) => {
        e.preventDefault();
        //get values
        const oldEmail = form.oldEmail.value;
        const newEmail = form.newEmail.value;
        try {
            const res = await fetch('/account', {
                method: 'POST',
                body: JSON.stringify({ oldEmail, newEmail }),
                headers: { 'Content-Type': 'application/json' }
            });
            const data = await res.json();
            console.log(data);
            if(data.user) {
                location.assign('/blogs');
            }
        }
        catch (err) {
            console.log(err);
        }        
    });
  </script>
</body>
</html>

accountPost in authController.js:

const accountPost = async (req, res) => {    
    const id = req.params.id;
    const {newEmail, oldEmail} = req.body;
    console.log(newEmail, oldEmail);
    let user = await User.findById(id);
    user.updateOne({
        '_id': id,
        'email': newEmail
    })
    .then(result => {
        res.redirect('/');
    })
}

module.exports = {
    accountPost
}

app.js

const express = require('express');
const morgan = require('morgan');
const mongoose = require('mongoose');
const blogRoutes = require('./routes/blogRoutes');
const authRoutes = require('./routes/authRoutes');
const cookieParser = require('cookie-parser');
const { checkUser } = require('./middleware/authMiddleware');
require('dotenv').config();

//express app
const app = express();

//mongoDB connection string
const dbURI = `mongodb+srv://${process.env.blog_username}:${process.env.blog_password}@nodecourse.h4qkmfb.mongodb.net/nodeCourse?retryWrites=true&w=majority`;

mongoose.connect(dbURI)
    .then((result) => app.listen(3000))
    .catch((err) => console.log(err));

//register view engine
app.set('view engine', 'ejs');

app.get('/', (req, res) => {
    res.redirect('/blogs');
});

app.get('/about', (req, res) => {
    res.render('about', { title: 'About'});
});

app.use(authRoutes);

//404 page
app.use((req, res) => {
    res.status(404).render('404', {title: '404'});
})


CS50W - Network - Like Button working but Like count is not updating

I am currently working on Project4 from CS50W. The task is to write a social media like site where users can post, follow and like.

I have implemented a like button on every post (which is working fine); unfortunately I have to refresh the page for the like to show. I would rather have the like count update directly after clicking the like button.

I am creating the div for the posts via JavaScript and calling the like_post function onclick:

function load_posts() {

  // get posts from /posts API Route
  fetch('/all_posts')
  .then(response => response.json())
  .then(posts => {

    // create a div element for each post
    posts.forEach(post => {
        let div = document.createElement('div');
        div.className = "card post-card";
        div.innerHTML = `
        <div class="card-body">
          <h5 class="card-title">${post['username']}</h5>
          <h6 class="card-subtitle mb-2 text-muted">${post['timestamp']}</h6>
          <p class="card-text">${post['text']}</p>
          <button class="card-text like-button" onclick="like_post(${post.id});"><h3> ♥ </h3></button>
          ${post['likes']}
        </div> 
        `;
        // append div to posts-view
        document.querySelector('#posts-view').append(div);
    });
  });
}

function like_post(post_id) {
  fetch('/post/' + post_id, {
    method: 'PUT',
    body: JSON.stringify({
        like: true
    })
  });
}
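
One way to make the count update without a refresh, as a hedged sketch: give the count its own element in the template string (a hypothetical <span id="like-count-${post.id}"> next to the button) and have the PUT handler respond with the new count as JSON instead of an empty 204, so the fetch callback can write it into the DOM. Both the element and the JSON shape below are assumptions, not part of the original code:

// Hedged sketch: assumes the post div contains
// <span id="like-count-${post.id}">${post.likes}</span>
// and that the Django view returns JSON like {"likes": 5} for the PUT.
function like_post(post_id) {
  fetch('/post/' + post_id, {
    method: 'PUT',
    body: JSON.stringify({ like: true })
  })
  .then(response => response.json())
  .then(data => {
    // Write the fresh count into the post's counter element.
    document.querySelector(`#like-count-${post_id}`).textContent = data.likes;
  });
}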

This is my view function that handles most post-related requests:

@csrf_exempt
@login_required
def post(request, post_id):
    
    # Query for requested post
    try:
        post = Post.objects.get(pk=post_id)
        user = User.objects.get(username=request.user)
    except Post.DoesNotExist:
        return JsonResponse({"error": "Post not found."}, status=404)

    # Return post contents
    if request.method == "GET":
        return JsonResponse(post.serialize())

    # Update likes
    elif request.method == "PUT":
        data = json.loads(request.body)
        if data.get("like") is not None and data.get("like") is True:
            if post.likes.filter(username=user.username).exists():
                post.likes.remove(User.objects.get(username=user.username))
            else: 
                post.likes.add(User.objects.get(username=user.username))
            return HttpResponse(status=204)
        
    # Post must be via GET or PUT
    else:
        return JsonResponse({
            "error": "GET or PUT request required."
        }, status=400)

I wanted to add a return HttpResponseRedirect(reverse("index")) or return redirect(reverse('index')) underneath post.likes.add and post.likes.remove to redirect to the current page, but that makes the return HttpResponse(status=204) unreachable, which results in this error:

Forbidden (CSRF token missing.): /
[28/Mar/2023 22:35:55] "PUT / HTTP/1.1" 403 2506


Remove comments from SQL/PLSQL blocks

I was looking for a way to remove comments from SQL/PLSQL blocks. It should meet the following criteria:

  1. Single line comments (--) should be removed.
  2. Multi line comments (/**/) should be removed.
  3. But most importantly if these comments come inside strings (single or double quotes) they should be ignored.

I have tried several regexes, but none of them is able to capture what I need, for example:

  1. --(?!.*(['""])[^'""]*\1)[^'\n\r]* -> for single line comments
  2. (''.*?''|".*?")|/\*.*?\*/|--.*?(?=$|\Z) -> for all cases

I found the second regex here; it does not work for all the cases.

Can someone please provide a sample using the C# regex engine?

PS: should I be proceeding with a regex-matching approach at all?
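
A hedged sketch of the usual single-pass trick in C#: match string literals and comments in one alternation, with the string groups first, so any comment-like text inside a string is consumed by the string match and kept; only bare comment matches are replaced with nothing. The quoted-identifier group assumes double quotes never nest:

using System;
using System.Text.RegularExpressions;

class StripSqlComments
{
    static void Main()
    {
        string sql = "SELECT '--not a comment' AS c -- real comment\nFROM t /* block */";

        // Strings first, then comments: a match in group 1 or 2 is kept as-is.
        string pattern = @"('(?:[^']|'')*')|(""(?:[^""]|"""")*"")|(--[^\r\n]*)|(/\*[\s\S]*?\*/)";

        string stripped = Regex.Replace(sql, pattern,
            m => m.Groups[1].Success || m.Groups[2].Success ? m.Value : "");

        Console.WriteLine(stripped);
    }
}

Whether regex is the right tool depends on how much of the PL/SQL grammar you need to honor (q-quoted strings like q'[...]', for instance, would defeat this pattern); a small tokenizer is the safer long-term answer.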



How to adjust SameSite

I am very much a beginner, and I'm confused about how to implement changing the SameSite attribute.

There seem to be plenty of similar posts. I understand I need to set the cookie to sameSite: 'none', secure: true; I'm just not sure where to place that within my code.

I am building a website using html and javascript, testing on a local server using Node.js.

I understand there is an example that shows the adjustment; I'm just confused as to where in my code to make it.

This is a result of the following error:

Because a cookie’s SameSite attribute was not set or is invalid, it defaults to SameSite=Lax, which prevents the cookie from being sent in a cross-site request. This behavior protects user data from accidentally leaking to third parties and cross-site request forgery. Resolve this issue by updating the attributes of the cookie: Specify SameSite=None and Secure if the cookie should be sent in cross-site requests. This enables third-party use. Specify SameSite=Strict or SameSite=Lax if the cookie should not be sent in cross-site requests.

The cookie is there to keep a user logged in across multiple pages using Firebase authentication. Do I need to specify the specific cookie? How does this affect security?

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Example</title>
</head>
<body>

    <div id="loggedOut">
        <h3>please log in.</h3>
        <form onsubmit="login(event)">
        
            <input type="text" id="email" name="email" placeholder="your@email.com">
            <input type="text" id="password" name="password" placeholder="password">
            <button type="submit" id="logIn" value="Login">login.</button>
            
        </form>
    </div>
    
</body>

<script type="module" >
    // FIREBASE CONFIG

    // Import the functions you need from the SDKs you need
    import { initializeApp } from "https://www.gstatic.com/firebasejs/9.18.0/firebase-app.js";
    import { getDatabase, set, ref, onValue } from "https://www.gstatic.com/firebasejs/9.18.0/firebase-database.js";
    import { getAuth, signInWithEmailAndPassword, setPersistence, browserLocalPersistence } from "https://www.gstatic.com/firebasejs/9.18.0/firebase-auth.js";

    
    // TODO: Add SDKs for Firebase products that you want to use
    // https://firebase.google.com/docs/web/setup#available-libraries

    // Your web app's Firebase configuration
    const firebaseConfig = {
        apiKey: "xx",
        authDomain: "xx",
        projectId: "xx",
        storageBucket: "xx",
        messagingSenderId: "xx",
        appId: "xx",
        databaseURL : "https://"
    };

    // Initialize Firebase
    const app = initializeApp(firebaseConfig);
    const database = getDatabase(app);
    const auth = getAuth();
     
    
    //const auth = getAuth(app);

    logIn.addEventListener('click', (e) => {

        var email = document.getElementById('email').value;
        var password = document.getElementById('password').value;

        signInWithEmailAndPassword(auth, email, password)
            .then((userCredential) => {
                // Signed in 
                const user = userCredential.user;

                window.location = './home.html';
                // ...
            })
            .catch((error) => {
                const errorCode = error.code;
                const errorMessage = error.message; // was error.email, which is undefined on the error object
                alert(errorMessage);
    });

    
    const user = auth.currentUser;
    

    if (user) {
        // User is signed in, see docs for a list of available properties
        // https://firebase.google.com/docs/reference/js/firebase.User
        // ...
        
        const displayName = "users" + user.uid;
        alert(displayName) 
        
        const starCountRef = ref(database, displayName + '/username');
        onValue(starCountRef, (snapshot) => {
            const data = snapshot.val();
            alert(data)
        });
    } else {
        // No user is signed in.
        alert('error')   
    } 
})

</script> 


<script>
    function login(event) {
        event.preventDefault()
    }    
    function logout() {
    }
</script>

</html>
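
On the question of where the attribute goes: it belongs wherever the Set-Cookie header is produced, i.e. on the server, not in this HTML. A hedged sketch, assuming an Express server issues the session cookie (the route, cookie name, and value are illustrative):

const express = require('express');
const app = express();

app.post('/sessionLogin', (req, res) => {
  // SameSite lives on the cookie itself, set by the server response.
  res.cookie('session', 'value-derived-from-firebase-auth', {
    httpOnly: true,
    sameSite: 'none', // allow the cookie on cross-site requests
    secure: true      // required whenever SameSite=None is used
  });
  res.sendStatus(200);
});

app.listen(3000);

If the cookie in question is set by Firebase itself rather than by your own code, there is nothing in this page to change; the warning concerns whoever sends the Set-Cookie header.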


LocalDateTime add millisecond

I want to increase the millisecond value of a LocalDateTime. I used plusNanos because there is no plusMillis. I wonder if this is the right way. I'm using JDK 1.8. I also want to know if there is a plus-millisecond function in later versions.

DateTimeFormatter f = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss.SSS");
LocalDateTime ldt = LocalDateTime.parse("2022-01-01 00:00:00.123",f);
        
System.out.println(ldt.format(f));
        
ldt = ldt.plusNanos(1000000);
        
System.out.println(ldt.format(f));

The output is:

2022-01-01 00:00:00.123
2022-01-01 00:00:00.124
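
A short note with a sketch: LocalDateTime still has no plusMillis in later JDKs (Instant does), but plus(long, TemporalUnit) with ChronoUnit.MILLIS exists since Java 8 and states the intent more directly than counting nanoseconds:

import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.time.temporal.ChronoUnit;

public class AddMillis {
    public static void main(String[] args) {
        DateTimeFormatter f = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss.SSS");
        LocalDateTime ldt = LocalDateTime.parse("2022-01-01 00:00:00.123", f);

        // Equivalent to plusNanos(1_000_000), but says "milliseconds".
        ldt = ldt.plus(1, ChronoUnit.MILLIS);

        System.out.println(ldt.format(f)); // 2022-01-01 00:00:00.124
    }
}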


2023-03-29

C++ HashTable Read access violation

I am building a hash table in C++ which takes a person's name as the key and stores his/her favorite drink. The hash table is defined as below:

class HashTable {
private:
    static const int m_tableSize = 10;

    struct item {
        std::string name;
        std::string drink;
        item* next;
    };

    item* Table[m_tableSize];
    // ... (rest of the class omitted in the original post)
};

I am using the constructor to fill every bucket in the hash table with "empty" items:

HashTable::HashTable()
{
    for (int i = 0; i < m_tableSize; i++)
    {
        Table[i] = new item;
        Table[i]->name = "empty";
        Table[i]->drink = "empty";
        Table[i]->next = NULL;
    }
}

This code works as it is, but here's the question:

As far as I know, Table[i] = new item; allocates the first item of each bucket on the heap instead of the stack. But if I remove this line, in other words if I want the first item of each bucket to be allocated on the stack, the program throws a read access violation exception. Why?

I don't necessarily want the first item of each bucket to be allocated on the stack, but I don't understand the compiler's behavior in this case. I know this may be a little basic, but can someone help?
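
As a hedged sketch of what is going on: without the new, Table holds ten uninitialized pointers, and Table[i]->name writes through garbage addresses, which is undefined behavior and shows up here as the access violation. If the goal is to avoid heap allocation for the bucket heads, one option is to store the head items by value (names below are illustrative):

#include <string>

struct Item {
    std::string name = "empty";
    std::string drink = "empty";
    Item* next = nullptr;
};

class HashTable {
private:
    static const int m_tableSize = 10;

    // Value storage: the heads live inside the HashTable object itself,
    // so no constructor-time new (and no delete) is needed for them.
    Item table[m_tableSize];
};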



PySpark add rank column to large dataset

I have a large dataframe and I want to compute a metric based on the rank of one of the columns. This metric really only depends on two columns from the dataframe, so I first select the two columns I care about and then compute the metric. Once the two relevant columns are selected, the dataframe looks something like this:

score     | truth
-----------------
0.7543    | 0
0.2144    | 0
0.5698    | 1
0.9221    | 1

The analytic we want to calculate is called "average percent rank", and we want to calculate it for the ranks of data where truth == 1. So the process is to compute the percent rank for every data point, select the rows where truth == 1, and finally compute the average percent rank of those data points. However, when we try to compute this, we get OOM errors. One of the issues is that the pyspark.sql function rank requires a Window, and we want the window to include the entire dataframe (the same goes for percent_rank). Some code:

w = Window.orderBy(F.col("score"))

avg_percent_rank = (
    df
    .select("score", "truth")
    .withColumn("percent_rank", F.percent_rank().over(w))
    .filter(F.col("truth") == 1)
    .agg(F.mean(F.col("percent_rank")))
)

This results in an OOM error. There are over 6 billion records, and we need this to work for datasets that may be a hundred times larger. Ultimately, the critical operation is the sorting and indexing; we can derive percent_rank from the index by dividing by the total number of rows.

Is there a better approach to computing rank than using a Window function?
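
A hedged sketch of the "sort then index" idea the question ends on: zipWithIndex assigns a global position after a distributed sort without pulling everything into one Window partition, and percent_rank is then just index / (n - 1). Ties are handled differently than by percent_rank proper, which may or may not matter for the metric:

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame(
    [(0.7543, 0), (0.2144, 0), (0.5698, 1), (0.9221, 1)],
    ["score", "truth"])

n = df.count()

# Global order comes from the distributed sort; zipWithIndex then numbers
# rows across partitions without collapsing them into a single partition.
indexed = (df.select("score", "truth")
             .orderBy("score")
             .rdd.zipWithIndex()
             .map(lambda pair: (pair[0]["score"], pair[0]["truth"], pair[1]))
             .toDF(["score", "truth", "idx"]))

avg_percent_rank = (indexed
                    .filter(F.col("truth") == 1)
                    .agg(F.mean(F.col("idx") / (n - 1))))
avg_percent_rank.show()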



Yet another "Unable to load dynamic library" problem

I'm using EasyPHP Devserver with Apache 2.4.25 and both PHP 5.6.30 and PHP 7.1.3 installed.

From time to time I switch between those two PHP versions, without changing any configuration anywhere, and it works fine.

Now I just added PHP 8.1.2, and when I switch to it I get the well-known error message:

PHP Warning: PHP Startup: Unable to load dynamic library 'openssl' (tried: C:\Program Files (x86)\EasyPHP-Devserver-17\eds-binaries\php\php812vs16x86x230327010135\ext\php_openssl (Le module spécifié est introuvable), C:\Program Files (x86)\EasyPHP-Devserver-17\eds-binaries\php\php812vs16x86x230327010135\ext\php_php_openssl.dll (Le module spécifié est introuvable)) in Unknown on line 0

Obviously I read a lot of answers about that, but none seems to help in my case, since the php.ini for 8.1.2 includes the same needed settings as the two other versions:

  • extension_dir = "C:\Program Files (x86)\EasyPHP-Devserver-17\eds-binaries\php\php812vs16x86x230327010135\ext", which is the correct path where php_openssl.dll resides
  • extension=openssl, rather than the extension=php_openssl.dll used for the older versions (but the error message says it did search for php_openssl.dll)

Also, despite what is said in https://www.php.net/manual/en/openssl.installation.php:

Note to Win32 Users
In order for this extension to work, there are DLL files that must be available to the Windows system PATH.
(...)
This extension requires the following files to be in the PATH: libeay32.dll, or, as of OpenSSL 1.1, libcrypto-*.dll

openssl is correctly loaded by PHP 5.6.30 and PHP 7.1.3, although neither that DLL nor the other cited files are present in the Windows system PATH.

So I remain stuck: what else can I try?



How to prevent Bootstrap collapse from overflowing inside an overflow-y-scroll container?

I have a container with the overflow-y-scroll CSS property. Inside it, there's a Bootstrap collapse button that expands and shows additional content when clicked. However, when the content expands, it overflows outside the container, making it hard to read, and it messes up the page.

I've tried setting the height of the collapse element and its parent div to 100%, but it didn't solve the issue. I expected the collapse element to expand without overflowing the container, so that users can scroll down to see the entire content.

A visualization of the issue I'm experiencing.

<div class="container mw-25">
    <div class="text-start text-white px-4">Knowledge Requirements</div>
    <!-- Outer box -->
    <div class="bg-white w-100 h-75 mt-2 mx-3 border rounded-2 rounded-end-0 p-2 overflow-y-scroll">
        <!-- Inner box -->
        <div class="form-check">
            <!-- Checkbox -->
            <input class="form-check-input shadow-none text-start" type="checkbox" value="" id="flexCheckDefault">
                <!-- Collapse button -->
                <a class="btn-sm w-50 text-dark no-hover pb-1 fw-semibold text-decoration-none bg-transparent" data-bs-toggle="collapse" href="#collapseExample" role="button" aria-expanded="false" aria-controls="collapseExample">
                    Writing  <span class="badge bg-secondary rounded-pill">Show</span>
                </a>
                <!-- Collapse feature -->
                <div class="collapse" id="collapseExample">
                    <!-- Collapse information-->
                    <label class="p-2 form-check-label border-bottom text-small text-wrap" for="flexCheckDefault">
                        <span class="fw-semibold">A:</span> blurred for privacy reasons
                    </label>
                    <label class="p-2 form-check-label border-bottom text-small text-wrap" for="flexCheckDefault">
                        <span class="fw-semibold">C:</span> blurred for privacy reasons
                    </label>
                </div>

        </div>
        <div class="form-check">
            <!-- Temporary secondary checkbox -->
            <input class="form-check-input shadow-none text-start" type="checkbox" value="" id="flexCheckChecked" checked>
                <label class="form-check-label" for="flexCheckChecked">
                    Checked checkbox
                </label>
        </div>
    </div>

</div>


Not able to install MongoDB on AWS Linux 2023

I'm trying to install MongoDB on Amazon Linux with the configuration below:

NAME="Amazon Linux"
VERSION="2023"
ID="amzn"
ID_LIKE="fedora"
VERSION_ID="2023"
PLATFORM_ID="platform:al2023"
PRETTY_NAME="Amazon Linux 2023"
ANSI_COLOR="0;33"
CPE_NAME="cpe:2.3:o:amazon:amazon_linux:2023"
HOME_URL="https://aws.amazon.com/linux/"
BUG_REPORT_URL="https://github.com/amazonlinux/amazon-linux-2023"
SUPPORT_END="2028-03-01"

I tried to install using the tarball and the repo. However, it failed with the error below:


[root@ip ~]# mongod --version

mongod: /lib64/libcrypto.so.10: version `OPENSSL_1.0.2' not found (required by mongod)

mongod: /lib64/libcrypto.so.10: version `libcrypto.so.10' not found (required by mongod)

mongod: /lib64/libssl.so.10: version `libssl.so.10' not found (required by mongod)

[root@ip ~]# openssl version

OpenSSL 3.0.8 7 Feb 2023 (Library: OpenSSL 3.0.8 7 Feb 2023)

[root@i ~]#

tar -zxvf mongodb-linux-x86_64-amazon2-6.0.5.tgz
mongod version


cbind/join complex dataframes of different lengths without identifier

Thank you so much in advance for any help--

I am trying to rbind/join two dataframes that do not have a unique identifier; the format is complex because of how they were web-scraped.

df1 contains assay results, with one row for each assay occurring on a day and a header row (ID, Name, Concentration) separating assays that happened on different days. df2 contains one row with the date of each day's assays. I need to bind the relevant assay date (df2) to each of the assay results (df1).

df1 = data.frame(matrix(0, 13, 3))
df1$X1 = c("ID","1","2","3","4","5","ID","1","2","3","ID","1","2")
df1$X2 = c("Name","Jose","Mary","Doug","Luisa","Pam","Name","Jose","Doug","Lou","Name","Luisa","Pam")
df1$X3 = c("Concentration","4.2","2.3","7.3","1.4","0.5","Concentration","0.1","2.3","2.1","Concentration","9.0","1.4")


df2 = data.frame(matrix(0, 3, 3))
names(df2) = c("X4", "X5", "X6")
df2$X4 = c("Monday", "Tuesday", "Friday")
df2$X5 = c("January", "February", "March")
df2$X6 = c("12", "4", "21") 
df2

In the end I want the dataframe to look like this: desired outcome (https://ift.tt/LqwnYOa)

So far I have tried to create an identifier for all assays that occurred on the same day (separated by the ID, Name, Concentration rows), but I have not been successful because the number of assays per day varies greatly (in reality I have >200,000 assays from dozens of days).

Thanks for any help!
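
A hedged sketch of one way to build that identifier: each header row (X1 == "ID") starts a new day, so a cumulative sum over that condition yields a day index that can subscript df2 directly:

day <- cumsum(df1$X1 == "ID")   # 1,1,...,2,...,3,... one value per df1 row
out <- cbind(df1, df2[day, ])   # attach the matching date row to every assay
out <- out[df1$X1 != "ID", ]    # drop the repeated header rows
rownames(out) <- NULL
out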



How can I configure AMQ broker on Openshift 4 to ensure messages are consumed by only one pod in a publish-subscriber model?

I am using OpenShift 4 to deploy multiple microservices which are connected via an AMQ broker using a publish-subscribe messaging model. However, when I increase the number of pods to 2, I run into an issue where all pods consume the same message, rather than just one.

Can someone suggest how to configure the Java code below, or the AMQ broker, to ensure that each message is consumed by only one pod? Are there any specific settings I should be aware of, or changes I need to make to my configuration? Thank you.

Configuration:

@Bean
public JmsListenerContainerFactory<DefaultMessageListenerContainer> jmsListenerContainerPublisherFactory(ConnectionFactory connectionFactory,
                                                                                                         DefaultJmsListenerContainerFactoryConfigurer configurer) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setErrorHandler(throwable -> {
        log.info("An error has occurred in the transaction: " + throwable.getMessage());
        log.error("Error: ", throwable);
    });
    configurer.configure(factory, connectionFactory);
    factory.setPubSubDomain(true);
    return factory;
}

Listener:

@JmsListener(destination = "${queue.dummyObject}",
        containerFactory = "jmsListenerContainerPublisherFactory")
public void onConsumePublishedMessage(String message) throws JsonProcessingException {
    DummyObjectDTO dummyObjectDTO = mapper.readValue(message, DummyObjectDTO.class);
    LOG.info(" Received onConsumePublishedMessage message : " + dummyObjectDTO);
}

Producer:

private  <T> T  sendFeatured(T value, String queue, boolean publish, Selector selector) {
    *
    *
    jmsTemplate.setPubSubDomain(true);
    jmsTemplate.convertAndSend(queue, objectAsJson);
    *
    *
    return value;
}
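
A hedged sketch of the Spring-side switch that usually produces "only one pod gets each message": point-to-point (queue) semantics instead of pub-sub, where the broker hands each message to exactly one of the competing consumers. Whether a plain queue fits, versus a shared durable topic subscription on the broker, depends on the rest of the setup:

@Bean
public JmsListenerContainerFactory<DefaultMessageListenerContainer> jmsListenerContainerQueueFactory(
        ConnectionFactory connectionFactory,
        DefaultJmsListenerContainerFactoryConfigurer configurer) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    configurer.configure(factory, connectionFactory);
    // false = queue semantics: competing consumers, one delivery per message.
    factory.setPubSubDomain(false);
    return factory;
}

The producer side would need the matching jmsTemplate.setPubSubDomain(false) before convertAndSend.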


2023-03-28

COGNOS SQL: COLUMN1+'|'+COLUMN2+'|'+COLUMN3 keeps only the first row instead of concatenating the columns for each row

I have a database with 3 columns and I want to generate a report with only one column: the concatenation of column 1 (ID), column 2 (DATE), and column 3 (LOCATION), separated by pipes:

ID DATE LOCATION
10 20230325 UK
11 20230325 UK
11 20230325 US
12 20230323 PT

I have tried concatenating the columns with '+' into a new column 4, like so: DATE+'|'+ID+'|'+LOCATION

However, when I prepare a report and select only column 4, I get just one row instead of the 4 rows of data in the database:

Current output of column 4--> 1 row only with the following value:

10|20230325|UK

Expected output of column 4--> 4 rows with the following values:

10|20230325|UK
11|20230325|UK
11|20230325|US
12|20230323|PT

How can I concatenate the 3 columns with a pipe delimiter and generate a report where all rows are shown instead of just the first row?



C++ multithread joining only works for the first thread but not for other threads

I was curious why the code below works and executes fine for the n = 1 case, but when I make n anything greater than 1 I get a system error most of the time.

I also know there is joinable() for checking whether a thread is done, but it is terrible here since it blocks the main thread. If there is a more elegant way of checking when a thread is done than passing a boolean pointer to a method, that would be appreciated.

#include <iostream>
#include <thread>
#include <atomic>
#include <array>

void my_function(bool* is_thread_done) {
    std::cout << "Thread started" << std::endl;
    std::this_thread::sleep_for(std::chrono::seconds(1));
    std::cout << "Thread finished" << std::endl;
    *is_thread_done = true;
}

int main() {   
    const int n = 1;
    std::array<std::thread, n> arr;
    std::array<bool, n> is_thread_done;
    for (int i = 0; i < n; i++) {
        is_thread_done[i] = false;
        arr[i] = std::thread(my_function, &is_thread_done[i]);
    }
    
    int all_done = 0;
    while (all_done < n) {
        for (int i = 0; i < n; i++) {
            if (is_thread_done[i]) {
                arr[i].join();
                all_done++;
            }
        }
    }

    return 0;
}

I have tried joinable but that is not useful because of blocking.

To answer the comments:

  1. The order in which I join the threads doesn't matter; a loop is just the way I'm doing it.
  2. I cannot join a thread before it's done because that would block the main thread. I simplified this code from my actual use case, and I cannot have the main thread blocked. This is the general premise, though. That is also why I cannot use joinable() to join.
  3. I have tried atomic<bool> and it did not seem to have any effect. I don't know why it would matter anyway, since by the time the boolean is set to true, the thread is done.

Here is the code from the thread pool I was making: n threads working on some task k times. Since k is greater than n, when a thread finishes, a new thread is dispatched to continue working on the task. The main thread needs to dispatch another thread immediately, not wait for n threads to finish and then dispatch another. In that case it would act like a barrier, which I don't need.

template<typename Func, typename... Args>
void ThreadPool::thread_pool_executor(Func func, Args... args) {
    while (running) {
        while (free_indecies.size() > 0 && running) {
            int i = free_indecies.front();
            pool[i] = std::make_unique<std::thread>(
                &ThreadPool::worker<Func, Args...>, this,
                &avalability[i], func, args...);
            free_indecies.pop();
            avalability[i] = false;
            current_iter++;
            if (current_iter == total_iter) running = false;
        }
        for (int i = 0; i < max_threads; i++) {
            if (avalability[i]) {
                pool[i]->join();
                free_indecies.push(i);
            }
        }
    }
    for (int i = 0; i < max_threads; i++) {
        if (pool[i]->joinable()) {
            pool[i]->join();
        }
    }
    reset();
}

template<typename Func, typename... Args>
void ThreadPool::worker(std::atomic<bool>* complete, Func func, Args... args) {
    std::invoke(func, args...);
    *complete = true;
}

I realized what Ben was saying and changed the loop in the first code segment to the following, and now that simpler example works. I created a set, not_done, and initialized it to hold the numbers 1 through n.

while (not_done.size() > 0) {
    for (int i : not_done) {
        if (is_thread_done[i]) {
            arr[i].join();
            not_done.erase(i);
        }
    }
}

I also realized that in the thread pool example where I am trying to apply this idea, I should have set avalability[i] = false right before creating the thread, to avoid a race condition. However, that code still fails with the thread::join failed: Invalid argument Abort trap: 6 error.
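
For what it's worth, a hedged alternative sketch: std::future from std::async offers a non-blocking completion check via wait_for with a zero timeout, which sidesteps both the shared-bool data race and the double-join problem (a future, unlike a thread, can be polled after it is ready without being "joined"):

#include <chrono>
#include <future>
#include <iostream>
#include <thread>
#include <vector>

int main() {
    std::vector<std::future<void>> futures;
    for (int i = 0; i < 4; i++) {
        futures.push_back(std::async(std::launch::async, [] {
            std::this_thread::sleep_for(std::chrono::seconds(1));
        }));
    }

    size_t done = 0;
    while (done < futures.size()) {
        done = 0;
        for (auto& f : futures) {
            // wait_for with zero timeout is a non-blocking readiness poll.
            if (f.wait_for(std::chrono::seconds(0)) == std::future_status::ready)
                done++;
        }
    }
    std::cout << "all tasks finished\n";
    return 0;
}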



Transform Excel table into lists below each table parameter (and skip blanks)

I know there are similar questions, and I've tried all of the code mentioned in them, but something isn't working for me. Please help.

My input is an Excel table set up like this:

VISITCODE dm1 dm2 dm3 dm4
thing1 B A A
thing2 A B B
thing3 A B A
thing4 B B A


I'd like the output to look something like this:

[screenshot of the desired output; not reproduced here]



toString and function issue

class Addresses {
    var streetname: String
    var streetnumber : Int
    var suburb: String
    var postcode:Int
    
    init( streetname:String, streetnumber:Int, suburb:String, postcode:Int ) {
        self.streetname = streetname
        self.streetnumber = streetnumber
        self.suburb = suburb
        self.postcode = postcode
    }
    func toString( streetname:String, streetnumber:Int, suburb:String, postcode:Int ) -> (String) {
        return (streetnumber),streetname, suburb, postcode
    }
}

I'm trying to create a class named Address which stores the postcode, suburb, street name, and street number of a building, with an initialiser as well as a toString method that returns a human-readable String version of the class. This error message keeps popping up: Cannot convert return expression of type 'Int' to return type 'String'
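
A hedged sketch of a version that compiles: return a single interpolated String (the parameters can be dropped, since the stored properties already hold the values), and conforming to CustomStringConvertible is the idiomatic Swift spelling of toString:

class Address: CustomStringConvertible {
    var streetname: String
    var streetnumber: Int
    var suburb: String
    var postcode: Int

    init(streetname: String, streetnumber: Int, suburb: String, postcode: Int) {
        self.streetname = streetname
        self.streetnumber = streetnumber
        self.suburb = suburb
        self.postcode = postcode
    }

    // String interpolation converts the Int properties, which is what the
    // original return statement was missing.
    var description: String {
        return "\(streetnumber) \(streetname), \(suburb) \(postcode)"
    }
}

print(Address(streetname: "High St", streetnumber: 12, suburb: "Carlton", postcode: 3053))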



Network interface drops 1518-byte Ethernet frames received over a direct Ethernet link from tcpreplay. Why?

I have two servers linked to each other. I tried to send a pcap from one of the servers over the direct link using the tcpreplay command. The pcap contains an HTTP POST session in which some frames are 1518 bytes: [Ethernet header (14 bytes)][payload (1500 bytes)][FCS (4 bytes)]. But the receiving server's interface drops packets larger than 1514 bytes.

Everything works correctly when I remove the last 4 bytes (FCS) from all packets and send the pcap, or when I change the MTU of the sender and receiver interfaces from 1500 to 1504. I can understand why the sender's interface MTU needs to be 1504, but why does the receiver's interface MTU need to be 1504? I expected the receiver's interface not to count the FCS bytes, just like the Ethernet header, because that is what happens when an actual 1518-byte Ethernet frame arrives at the receiver's interface from the internet.

Is there any difference between the receiver consuming Ethernet frames from the internet and two directly linked servers where one sends to the other with tcpreplay?

Thanks in advance.



Install Font Awesome on Rails 7.0.4.3

How can I install Font Awesome for icons on Rails 7.0.4.3 with Ruby 3.2.1?

I tried three methods:

1. yarn add @fortawesome/fontawesome-free + import "@fortawesome/fontawesome-free/js/all"
2. ./bin/importmap pin @fortawesome/fontawesome-free + import "@fortawesome/fontawesome-free"
3. gem "font-awesome-sass", "~> 6.3.0" +  bundle install + @import "font-awesome"

But when I test with an icon, it does not render.



How can I write integration tests for a Rust CLI tool that uses inquire?

Here is an example of some code I would like to test:

fn main() {
    let ans = Confirm::new("Yes or no?")
        .with_default(false)
        .with_help_message("(It's not a trick question)")
        .prompt();

    let res =  match ans {
        Ok(true) => "You said yes!".to_string(),
        Ok(false) => "You said no...".to_string(),
        Err(_) => "Error with questionnaire, try again later".to_string(),
    };

    println!("{}", res);
}

I am trying to use the Command API (from the assert_cmd crate) to write a test, but I am having some problems. My first problem is sending the different kinds of user input when prompted (e.g. enter with no input, y, Y, yes, Yes, YES, n, no, No, NO, nO, false, foo) and then asserting that the appropriate response is logged to the console.

I would also like to assert that the correct initial prompt message and help message are logged to the console. How can I do this?

Here's my non-working test:

// imports assumed by the snippet: Command::cargo_bin comes from assert_cmd
use assert_cmd::prelude::*;
use std::process::Command;
use std::str;

#[test]
fn asks_are_you_cool() -> Result<(), Box<dyn std::error::Error>> {

    let output_bytes = Command::cargo_bin("cool_bool")?.output().unwrap().stdout;

    let output_str = match str::from_utf8(&output_bytes) {
        Ok(val) => val,
        Err(_) => panic!("got non UTF-8 data from stdout"),
    };

    assert_eq!(output_str, "That's too bad, I though you might have been.\n");

    Ok(())
}

Thanks!
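
A hedged sketch using the rexpect crate (an assumption; it is not mentioned in the question): it runs the binary inside a pseudo-terminal, which matters because inquire talks to the TTY rather than plain piped stdin. The binary path and expected strings come from the question's example:

use rexpect::spawn;

#[test]
fn answers_yes() -> Result<(), Box<dyn std::error::Error>> {
    // Spawn the compiled binary in a PTY with a 30-second timeout.
    let mut p = spawn("target/debug/cool_bool", Some(30_000))?;

    p.exp_string("Yes or no?")?;    // assert the prompt text appeared
    p.send_line("y")?;              // simulate the user's keystrokes
    p.exp_string("You said yes!")?; // assert the printed response

    Ok(())
}

Each input variant (Y, yes, n, and so on) would get its own send_line call in its own test case.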



2023-03-27

Google Map Custom Markers from URL slow to load

Problem: markers take about 10 seconds to load despite being very small icons. I'm sure there's a problem with the way I am loading them; I would expect them to appear nearly instantly. I have a working Google Maps example linked here, where you'll see the markers load slowly. I have another example of this in flutter_map which works fine and loads everything immediately. What do I have to do to get my markers to load all at once on the Google Map?

import 'package:cloud_firestore/cloud_firestore.dart';
import 'package:firebase_core/firebase_core.dart';
import 'package:flutter/material.dart';
import 'package:google_maps_flutter/google_maps_flutter.dart';
import 'package:custom_marker/marker_icon.dart';
import '../../models/marker_collect_model.dart';

class PhotoCustomMap extends StatefulWidget {
  @override
  _PhotoCustomMapState createState() => _PhotoCustomMapState();
}

class _PhotoCustomMapState extends State<PhotoCustomMap> {
  List<Marker> list = [];
  List<String> listDocuments = [];

  Future<void> readDataFromFirebase() async {
    await Firebase.initializeApp();
    FirebaseFirestore firestore = FirebaseFirestore.instance;
    CollectionReference<Map<String, dynamic>> collectionReference =
    firestore.collection('NWNSPHOTOS2023');
    collectionReference.snapshots().listen((event) async {
      List<DocumentSnapshot> snapshots = event.docs;
      for (var map in snapshots) {
        Map<String, dynamic> data =
        map.data() as Map<String, dynamic>; // add this line
        MarkerCollectModel model =
        MarkerCollectModel.fromMap(data); // use data here
        String nameDocument = map.id;
        listDocuments.add(nameDocument);
        Marker marker = await createMarker(model, nameDocument);
        setState(() {
          list.add(marker);
        });
      }
    });
  }

  Future<Marker> createMarker(
      MarkerCollectModel markerCollectModel, String nameDocument) async {

    BitmapDescriptor bitmapDescriptor = await MarkerIcon.downloadResizePictureCircle(
        markerCollectModel.pathImageSmall!,
        addBorder: true,
        borderColor: Colors.blue,
        borderSize: 5,
        size: 50
    );


    Marker marker;
    marker = Marker(
      markerId: MarkerId(nameDocument),
      position: LatLng(markerCollectModel.lat!, markerCollectModel.lng!),
      icon: bitmapDescriptor,
    );
    return marker;
  }

  Set<Marker> myMarkers() {
    return list.toSet();
  }

  // Method
  @override
  void initState() {
    super.initState();
    readDataFromFirebase();
  }

  @override
  Widget build(BuildContext context) {
    WidgetsFlutterBinding.ensureInitialized();
    Firebase.initializeApp();
    CameraPosition cameraPosition =
    CameraPosition(target: LatLng(47.088717, -122.496509), zoom: 7.7);
    return Scaffold(
        body: Center(
            child: Stack(
              children: <Widget>[
                GoogleMap(
                  initialCameraPosition: cameraPosition,
                  markers: myMarkers(),
                ),
              ],
            )));
  }
}
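
A hedged sketch of one likely speed-up (names from the question's code): download all marker icons concurrently with Future.wait and publish them with a single setState, instead of awaiting each icon inside the loop and rebuilding per marker:

collectionReference.snapshots().listen((event) async {
  // Kick off every icon download at once instead of one at a time.
  final markers = await Future.wait(event.docs.map((doc) {
    final model =
        MarkerCollectModel.fromMap(doc.data() as Map<String, dynamic>);
    listDocuments.add(doc.id);
    return createMarker(model, doc.id);
  }));
  setState(() {
    list = markers;
  });
});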


TIMEOUT no longer waits as stated

I originally posted this on the Microsoft Answers community website but was told they don't deal with this sort of question (funny, as I suspect it's a bug in one of the CMD.EXE commands or something related).

Here's my question...

Suddenly the Windows 11 cmd.exe command TIMEOUT is failing for me in a CMD file: it doesn't wait the prescribed time.

e.g. TIMEOUT /T 8 reports the starting seconds for the countdown and immediately exits.

While debugging, I found that this does not occur if I enter the command manually at a CMD.EXE prompt. Further testing shows that it is actually the first call of TIMEOUT in a CMD file that fails; subsequent ones work as documented. My current circumvention is to code a TIMEOUT /T 1 at the start of the CMD file to ensure the 'real' ones behave.

A simple batch file demonstrates the effect (on my machine at least).

Batch file (saved as "test.CMD" in my case). NB: the editor seems to want to combine some of the lines shown below; if you use the code, just add the newlines back in.

echo on 

timeout.exe /T 8 

echo. 

echo now invoking with full path ...

%SystemRoot%\System32\timeout.exe /T 8 

echo. 

echo now invoking choice ... 

choice /T 8 /C Y /D Y /M "Press Y to continue (or wait for timeout)"

echo. 

echo do a PAUSE so you can see the results on the screen before batch file finishes running

PAUSE

Having saved this file, double-click it to run it. The first call of timeout fails; the second always seems to work. It doesn't matter whether you put the full path on the timeout.exe command or not, nor in what order.

Has anyone else encountered this problem? I have only been able to find the circumvention mentioned above.

Thanks.

======================== Added as per Mofi's comment:

C:\Users\lorde>set path
Path=C:\Program Files (x86)\Common Files\Oracle\Java\javapath;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\WINDOWS\System32\WindowsPowerShell\v1.0;C:\WINDOWS\System32\OpenSSH;C:\Users\lorde\AppData\Local\Microsoft\WindowsApps;
PATHEXT=.COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH;.MSC

C:\Users\lorde>%SystemRoot%\System32\reg.exe query HKCU\Environment /v Path

HKEY_CURRENT_USER\Environment
    Path    REG_EXPAND_SZ    %USERPROFILE%\AppData\Local\Microsoft\WindowsApps;

C:\Users\lorde>%SystemRoot%\System32\where.exe timeout
C:\Windows\System32\timeout.exe

C:\Users\lorde>%SystemRoot%\System32\reg.exe query "HKCU\Software\Microsoft\Command Processor" /v Autorun
ERROR: The system was unable to find the specified registry key or value.

C:\Users\lorde>



Rails: Two Applications using the same database (WebService <--> API)

There is a question here on Stack Overflow asking the same thing I want to ask, but mine has one different aspect.

Today I have a single Rails application that serves both the web application and the API, but I want to split them to manage server resources more efficiently.

The question is: how could App1 know when App2 inserts a new record? Rpush? A SQL trigger? Or by turning the API into a gem?

App1 - WebService Port 80 using DB1

class Post < ActiveRecord::Base
 belongs_to :user
 before_save :do_stuff_when_app2_insert_record #I know this doesn't work in this environment  

 def do_stuff_when_app2_insert_record
  ...
 end
end


App2 - API connection - Port 8080 using DB1

class Post < ActiveRecord::Base
 belongs_to :user
end

Post.create(name: 'post')
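
A hedged sketch of one common pattern (certainly not the only one): rather than App1 watching the database, App2 announces new records over HTTP in an after_create callback. The endpoint URL is illustrative:

require 'net/http'

class Post < ActiveRecord::Base
  belongs_to :user
  after_create :notify_webservice

  private

  def notify_webservice
    # App1 exposes an internal endpoint and reacts there, instead of in a
    # before_save hook it can never see.
    uri = URI('http://app1.example.com/internal/posts')
    Net::HTTP.post_form(uri, id: id, name: name)
  end
end

A background job around the HTTP call would keep a slow or down App1 from blocking App2's writes.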


Update randomly indexed database columns in PHP [closed]

I am pushing a hidden field into the database that will be updated as tickets get bought; this is the field that inserts the column: <input type="hidden" class="other-bookable-tickets-sold" name="_other_tickets[-1][sold]" value="0">

What I have tried:

In the database a column is created for each field added, e.g. _other_tickets[0][sold], _other_tickets[1][sold], _other_tickets[2][sold], etc.

How do I access and update these random columns?

// update tickets sold
function update_ticket_sold( array $tickets, $listing_id )
{
    $other_tickets = get_post_meta( $listing_id, '_other_tickets', true );
    if (isset($tickets) && is_array(($tickets))) {
        $i = 0;
        foreach ($other_tickets as $key => $ticket) {
            if (in_array(sanitize_title($ticket['name']), array_column($tickets,'ticket'))) { 
                // array_keys($ticket) returns an array, which stringifies to "Array";
                // the row index from the foreach is what the column name needs here.
                $column = '_other_tickets[' . $key . '][sold]';
                $already_sold_tickets = (int) get_post_meta($listing_id, $column, true);
                // $countable comes from the surrounding context (not shown in the post)
                $sold_now = $already_sold_tickets + (float) $countable[$i];
                update_post_meta($listing_id, $column, $sold_now);
                $i++;
            }
        } //end foreach
    }
}


Problem with redux-toolkit (createAsyncThunk) in react native expo app

Hello, and thank you in advance for your help.

My problem is with redux-toolkit in a React Native application using Expo. To put you in context, I am quite a beginner.

Here is my code :

export const fetchStationsInformations = createAsyncThunk(
  "stations/fetchStationsInformations",
  async () => {
    console.log(process.env.NODE_ENV);
    if (process.env.NODE_ENV === "test") {
      return require("@data/stationsInformations.json");
    }
    const response = await api.get("/stationsInformations");
    return response.data;
  }
);

export const fetchStationsStatus = createAsyncThunk(
  "stations/fetchStationsStatus",
  async () => {
    console.log(process.env.NODE_ENV);
    if (process.env.NODE_ENV === "test") {
      return require("@data/stationsStatus.json");
    }
    const response = await api.get("/stationsStatus");
    return response.data;
  }
);

I would like to understand why, when my file contains both the fetchStationsInformations and fetchStationsStatus functions, I get this error:

ERROR  [Error: Exception in HostFunction: Compiling JS failed: 2:20:invalid expression Buffer size 613 starts with: 5f5f642866756e6374696f6e28676c6f and has protection mode(s): rw-p]

even though fetchStationsStatus is not used and fetchStationsInformations is used. I tried clearing the cache with expo start --clear.

But if I delete the fetchStationsInformations method, then it works. I have looked at a lot of documentation and Stack Overflow posts, but I can't find a solution.

Thank you!



2023-03-26

Angular Cannot read properties of undefined in html

I'm an Angular rookie and working on getting better at it.

I have three files: a service (Studentservice.ts) which emits a value through an observable (the ShowBeerDetails method), and a component (Beer-details.component.ts) where I later subscribe to it.

I made sure that I'm receiving the emitted values by console logging in my component's ngOnInit method, and the value of beerobj prints just fine.

The issue is that I get the error below in my HTML when the view is loaded. I think the reason is that the shared service isn't resolving the beerobj variable before the view tries to reach it (I may be wrong about this).

I tried adding a <div *ngIf="beerobj"> wrapper, which prevents the console error, but since beerobj still remains undefined I'm not seeing my values (name, price, and id) printed.

I'm not sure how to fix this. I have read multiple SO questions but none helped me, e.g. ERROR TypeError: Cannot read properties of undefined (reading 'imageUrl') Angular material


Studentservice.ts

import { EventEmitter, Injectable, Output } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { studentInterface } from './studentInterface';
import { Observable } from 'rxjs';
import { beer } from './beerInterface';

@Injectable({
  providedIn: 'root'
})
export class StudentService {

  constructor(private _http:HttpClient) { }
  private _url="../assets/data/student.json";

  getStudents(): Observable<studentInterface[]>{  
//return [{"id":1, "name":"Rama"},{"id":2, "name":"Bheema"},{"id":3, "name":"Hanuman"}];

return this._http.get<studentInterface[]>(this._url);

}

getBeerList(): Observable<beer[]>{

  return this._http.get<beer[]>('https://api.sampleapis.com/beers/ale');
}

@Output() OnShowDetailsClicked= new EventEmitter<beer>;

ShowBeerDetails(beerobj:beer){
  console.log('Value of beer passed in showuserdetials'+beerobj.name);
this.OnShowDetailsClicked.emit(beerobj);
}
}

Beer-details.component.ts

import { Component, OnInit } from '@angular/core';
import { ActivatedRoute, ParamMap } from '@angular/router';
import { beer } from '../beerInterface';
import { StudentService } from '../student.service';

@Component({
  selector: 'app-beer-details',
  templateUrl: './beer-details.component.html',
  styleUrls: ['./beer-details.component.css']
})
export class BeerDetailsComponent implements OnInit {

  constructor(private _activatedRoute: ActivatedRoute,private _stdservice:StudentService) { }

  beerobj!: beer;
  myname:string="sreemanth";

  ngOnInit(): void {


    this._stdservice.OnShowDetailsClicked.subscribe((data:beer)=>{
      this.beerobj=data;
      console.log('Value set to beer in subscribe'+this.beerobj.name);
      console.log('the emitted beer value in subscribe'+data.name)});

  }
}

Student.component.ts

import { Component, OnInit } from '@angular/core';
import { ActivatedRoute, ParamMap, Router } from '@angular/router';
import { Beer } from '../beerInterface';
import { StudentService } from '../student.service';


@Component({
  selector: 'app-student',
  templateUrl: './student.component.html',
  styleUrls: ['./student.component.css']
})


export class StudentComponent implements OnInit {

public selectedid:any;
public beers:Beer[]=[];


  constructor(private _router: Router,private _activatedRoute: ActivatedRoute,private _stdservice:StudentService) {

   }



  ngOnInit(): void {

    this._stdservice.getBeerList().subscribe(data=>this.beers=data);


    this._activatedRoute.paramMap.subscribe((params: ParamMap) =>{
      let id=parseInt(params.get('id')||'');
      this.selectedid=id;
    });
  
  }

  onselect(x: { id: any; }){
//this._router.navigate(['/studentdetails',x.id])
this._router.navigate([x.id],{relativeTo:this._activatedRoute})
  }


  onBeerselect(x:Beer){
    console.log('Value of beer passed'+x.id);
    this._stdservice.ShowBeerDetails(x);
    this._router.navigate(['/beer-list',x.id])
   // this._router.navigate([x.id],{relativeTo:this._activatedRoute})
      }


  isSelected(x:any){
return x.id===this.selectedid;

  }

  public studentdetials=[
{"id":1,"name":"sreemanth","grade":"A"},
{"id":2,"name":"robert","grade":"B"},
{"id":3,"name":"karim","grade":"C"}
  ];

}

beer-details.component.html

<!--<div *ngIf="beerobj">-->
<p>beer-details works!</p>
<p></p>
<p></p>
<p></p>
<!--</div>-->

I have checked my code at https://github.com/Jasti4Git/helloworld
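
A hedged sketch of the usual fix for "component subscribes after the service has already emitted": back the service with a ReplaySubject (or BehaviorSubject) instead of an EventEmitter, so late subscribers still receive the last value. Names below mirror the question loosely:

import { Injectable } from '@angular/core';
import { ReplaySubject } from 'rxjs';
import { beer } from './beerInterface';

@Injectable({ providedIn: 'root' })
export class BeerSelectionService {
  // Replays the single most recent beer to anyone who subscribes later.
  private selected = new ReplaySubject<beer>(1);
  selected$ = this.selected.asObservable();

  showBeerDetails(beerobj: beer): void {
    this.selected.next(beerobj);
  }
}

The component would then subscribe to selected$ in ngOnInit; because BeerDetailsComponent is created by the router navigation that follows the click, a plain EventEmitter has already fired by the time it subscribes.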



Apple Metal lineStrip how to draw thicker lines

I am developing an iOS application using Xcode 14.2, and my deployment target is iOS 16.2. I have a list of X, Y, and Z values that I want to draw with Metal using lineStrip. This works; however, the line that is drawn is too thin for my purposes. I've read about various strategies for thickening the line, and I've decided to try drawing the same line many times with a small amount of noise added each time, to give the appearance of a thicker line.

I generate 3 random floats each time through the render loop and send them to my vertex shader in a uniform. The issue is that the resulting line looks more periodic than random, and more iterations do not give the appearance of a thicker line.

How can I draw thicker lines using this strategy? Thank you.


Draw many iterations:

// Draw many iterations
for iteration in 1...1024 {
    scene.track?.draw(encoder: commandEncoder,
                      modelMatrix: accumulatedRotationMatrix,
                      projectionMatrix: projectionMatrix * viewMatrix,
                      secondsInEpoch: Float(self.epochTime))
}

Random floats:

var jitter = 1.0 / Float(self.screenSizeX) - 1 / Float(self.screenSizeY)
var jitterX = Float.random(in: -jitter...jitter)
var jitterY = Float.random(in: -jitter...jitter)
var jitterZ = Float.random(in: -jitter...jitter)

Vertex Uniform:

struct VertexUniforms {
    
    var viewProjectionMatrix: float4x4
    var modelMatrix: float4x4
    var normalMatrix: float3x3
    var jitterX: Float
    var jitterY: Float
    var jitterZ: Float
    var iteration: Float
}

Draw primitives call:

encoder.drawPrimitives(type: .lineStrip , vertexStart: 0, vertexCount: vertices.count / 3)

Vertex shader:

// Calculate the jitter for X/Y/Z
//float subFactor = 0.0099;
float subFactor = 0.0105;
float smallFactorX = (subFactor * uniforms.jitterX);
float smallFactorY = (subFactor * uniforms.jitterY);
float smallFactorZ = (subFactor * uniforms.jitterZ);
if (vertexId % 2 == 0) {
    vertexOut.position.x += (vertexOut.position.x * smallFactorX);
    vertexOut.position.y += (vertexOut.position.y * smallFactorY);
    vertexOut.position.z += (vertexOut.position.z * smallFactorZ);
} else {
    vertexOut.position.x -= (vertexOut.position.x * smallFactorX);
    vertexOut.position.y -= (vertexOut.position.y * smallFactorY);
    vertexOut.position.z -= (vertexOut.position.z * smallFactorZ);
}

return vertexOut;
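
One thing worth checking: if the three jitter floats are generated once per frame, all 1024 draw calls share the same offsets, so the iterations collapse onto a single displaced line instead of fattening it. A hedged sketch that regenerates the jitter per iteration and passes it into the draw call (the jitterX/Y/Z parameters here are hypothetical additions to the existing draw signature):

// Regenerate the noise for every iteration, not once per frame
let jitter = 1.0 / Float(self.screenSizeX)
for _ in 1...1024 {
    scene.track?.draw(encoder: commandEncoder,
                      modelMatrix: accumulatedRotationMatrix,
                      projectionMatrix: projectionMatrix * viewMatrix,
                      secondsInEpoch: Float(self.epochTime),
                      jitterX: Float.random(in: -jitter...jitter),
                      jitterY: Float.random(in: -jitter...jitter),
                      jitterZ: Float.random(in: -jitter...jitter))
}

For genuinely thick lines, the more robust route is to expand each segment into a screen-space quad (two triangles) in the vertex stage, since Metal, like most modern graphics APIs, does not support wide line rasterization.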


Stack and explode columns in pandas

I have a dataframe to which I want to apply explode and stack at the same time: explode the 'Attendees' column and assign the correct values to the courses. For example, for Course 1 'intro to' the number of attendees was 24, while for Course 2 'Computer skill' it was 46. In addition, I want all the course names in one column.

import pandas as pd
import numpy as np

Course_df = pd.DataFrame({'Session': ['session1', 'session2', 'session3'],
                          'Course 1': ['intro to', 'advanced', 'Cv'],
                          'Course 2': ['Computer skill', np.nan, 'Write cover letter'],
                          'Attendees': ['24 & 46', '23', '30']})

If I apply the explode function to 'Attendees' I get the result

Course_df = Course_df.assign(Attendees=Course_df['Attendees'].str.split(' & ')).explode('Attendees')

    Session      Course 1   Course 2          Attendees
0   session1     intro to   Computer skill    24
0   session1     intro to   Computer skill    46
1   session2     advanced   NaN               23

and when I apply the stack function

Course_df = (Course_df.set_index(['Session','Attendees']).stack().reset_index().rename({0:'Courses'}, axis = 1))

This is the result I get

  Session     level_1             Courses      Attendees
0  session1  Course 1            intro to        24
1  session1  Course 2      Computer skill        46
2  session2  Course 1            advanced        23
3  session3  Course 1                  Cv        30

Whereas the result I want is

   Session     level_1             Courses      Attendees
0  session1  Course 1            intro to        24
1  session1  Course 2      Computer skill        46
2  session2  Course 1            advanced        23
3  session3  Course 1                  Cv        30
4  session3  Course 2   Write cover letter        30
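
One way to reach the desired frame is to stack first and then match each course to its attendee count by position within the session. A minimal sketch; it assumes (based on the desired output) that the last count should be broadcast when a session lists fewer counts than courses, as session3 does:

import pandas as pd
import numpy as np

Course_df = pd.DataFrame({'Session': ['session1', 'session2', 'session3'],
                          'Course 1': ['intro to', 'advanced', 'Cv'],
                          'Course 2': ['Computer skill', np.nan, 'Write cover letter'],
                          'Attendees': ['24 & 46', '23', '30']})

# Stack the course columns first so each course gets its own row
long = (Course_df
        .set_index(['Session', 'Attendees'])
        .stack()
        .reset_index()
        .rename(columns={'level_2': 'level_1', 0: 'Courses'}))

# Pick the i-th attendee count for the i-th course within each session,
# falling back to the last count when there are more courses than counts
counts = long['Attendees'].str.split(' & ')
pos = long.groupby('Session').cumcount()
long['Attendees'] = [c[min(i, len(c) - 1)] for c, i in zip(counts, pos)]

print(long[['Session', 'level_1', 'Courses', 'Attendees']])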


UseLazyLoadingProxies with EF core loads whole list when adding a new child entity

I'm using UseLazyLoadingProxies with EF Core, PostgreSQL and a DDD architecture. I have Parent and Child objects; let's say they look something like this:

public class Parent
{
    public int Id { get; private set; }
    public string Name { get; private set; }
    public virtual ICollection<Child> Children { get; private set; }

    public void AddChild(Child child)
    {
        Children.Add(child);
    }
}

public class Child
{
    public int Id { get; private set; }
    public string Name { get; private set; }
}

When I use parent.AddChild(child), the proxy does its thing and pulls the whole list of child elements before adding the new child to the collection. Is there a way to work around this? There can be 50,000 child elements, and right now it pulls all of those rows from the database for no reason other than to add a new child entity.
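
A common workaround, sketched below under the assumption that exposing the foreign key on Child is acceptable in your model: add the child through the DbContext instead of through the proxied navigation collection, so EF never has to materialize Parent.Children.

// Hypothetical shape -- adjust names to your model
public class Child
{
    public int Id { get; private set; }
    public string Name { get; private set; }
    public int ParentId { get; private set; }  // FK instead of touching Parent.Children

    public Child(string name, int parentId)
    {
        Name = name;
        ParentId = parentId;
    }
}

// In application code: no lazy load of the 50,000 existing children
context.Set<Child>().Add(new Child("name", parent.Id));
context.SaveChanges();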



Troubleshooting Google Domains hosting with Amazon Route 53 -> Application Load Balancer

I have a domain managed on Google Domains and would like it to point to Amazon Route 53. I've configured the custom name servers on Google to point to the ones for my public hosted zone, and using WHOIS I can confirm that these are the name servers being returned when I try to access the domain. The problem is that when I navigate to this domain in my browser it says "Server not found." I have an alias record set up in Route 53, of type A, that points to the Application Load Balancer running my application (and another alias for www.domain.com). When I hit "test record" and select type A, it returns an IP address that I can go to and verify is running my app.

TL;DR: WHOIS shows the Amazon name servers, and testing the records in Route 53 returns IPs from an aliased A record that are indeed hosting my app, but when I navigate to domain.com in my browser nothing happens.

Thanks!

I've tried updating the type of the record in Route 53 to both AAAA and CNAME (changing what they point to respectively) with no luck. I've tried destroying the hosted zone and making it again.

UPDATE: I feel quite stupid, but waiting ~36 hours worked. I guess it just took a while; I was only expecting it to take up to 24.



Can I initialize 2 Flutter apps?

I have a site, mysite.com, where I have two Flutter apps: one in the root folder at mysite.com/ and the other at mysite.com/second/.

I wanted to know: when going directly to mysite.com/second/, is it possible to also load the app in the root folder, instead of loading only the second app?

That way, if I switch from the second app to the first via url_launcher, I don't have to wait for Flutter to load...

I don't have many web skills, so maybe the problem lies there.

I searched the documentation and online but couldn't find anything.



2023-03-25

microprofile-openapi-api 2.0 switches off hibernate-jpamodelgen

I have a Java microservice project with these two dependencies defined in my build.gradle file:

implementation(group: 'org.eclipse.microprofile.openapi', name: 'microprofile-openapi-api', version: '1.2')

annotationProcessor('org.hibernate.orm:hibernate-jpamodelgen:6.1.6.Final')

This works, meaning that I get my JPA metamodels created. But as soon as I bump microprofile-openapi-api to 2.0 or above, jpamodelgen no longer runs (no errors, it just does not run). It is like clockwork: going above 1.2 creates the problem.

Any ideas as to what is causing this behavior and what can be done?
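
As a first diagnostic step (hedged: this only shows what changed, not why), it may help to compare the resolved dependency trees between the two versions, since a transitive JAR pulled in by the newer microprofile-openapi-api could be silently disrupting annotation processing:

./gradlew dependencies --configuration compileClasspath
./gradlew dependencies --configuration annotationProcessor

If something new appears on those paths after the bump, an explicit exclusion or version constraint in build.gradle would be the next thing to try.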



RuntimeError: GET was unable to find an engine to execute this computation when using Trainer.train() from Hugging Face

RuntimeError                              Traceback (most recent call last)
Input In [46], in <cell line: 1>()
----> 1 train_results = trainer.train()
      2 wandb.finish()

File /opt/conda/lib/python3.10/site-packages/transformers/trainer.py:1543, in Trainer.train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
   1538     self.model_wrapped = self.model
   1540 inner_training_loop = find_executable_batch_size(
   1541     self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size
   1542 )
-> 1543 return inner_training_loop(
   1544     args=args,
   1545     resume_from_checkpoint=resume_from_checkpoint,
   1546     trial=trial,
   1547     ignore_keys_for_eval=ignore_keys_for_eval,
   1548 )

File /opt/conda/lib/python3.10/site-packages/transformers/trainer.py:1791, in Trainer._inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval)
   1789         tr_loss_step = self.training_step(model, inputs)
   1790 else:
-> 1791     tr_loss_step = self.training_step(model, inputs)
   1793 if (
   1794     args.logging_nan_inf_filter
   1795     and not is_torch_tpu_available()
   1796     and (torch.isnan(tr_loss_step) or torch.isinf(tr_loss_step))
   1797 ):
   1798     # if loss is nan or inf simply add the average of previous logged losses
   1799     tr_loss += tr_loss / (1 + self.state.global_step - self._globalstep_last_logged)

File /opt/conda/lib/python3.10/site-packages/transformers/trainer.py:2539, in Trainer.training_step(self, model, inputs)
   2536     return loss_mb.reduce_mean().detach().to(self.args.device)
   2538 with self.compute_loss_context_manager():
-> 2539     loss = self.compute_loss(model, inputs)
   2541 if self.args.n_gpu > 1:
   2542     loss = loss.mean()  # mean() to average on multi-gpu parallel training

File /opt/conda/lib/python3.10/site-packages/transformers/trainer.py:2571, in Trainer.compute_loss(self, model, inputs, return_outputs)
   2569 else:
   2570     labels = None
-> 2571 outputs = model(**inputs)
   2572 # Save past state if it exists
   2573 # TODO: this needs to be fixed and made cleaner later.
   2574 if self.args.past_index >= 0:

File /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
   1496 # If we don't have any hooks, we want to skip the rest of the logic in
   1497 # this function, and just call forward.
   1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
   1499         or _global_backward_pre_hooks or _global_backward_hooks
   1500         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501     return forward_call(*args, **kwargs)
   1502 # Do not call functions when jit is used
   1503 full_backward_hooks, non_full_backward_hooks = [], []

File /opt/conda/lib/python3.10/site-packages/transformers/models/swinv2/modeling_swinv2.py:1274, in Swinv2ForImageClassification.forward(self, pixel_values, head_mask, labels, output_attentions, output_hidden_states, return_dict)
   1266 r"""
   1267 labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
   1268     Labels for computing the image classification/regression loss. Indices should be in `[0, ...,
   1269     config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If
   1270     `config.num_labels > 1` a classification loss is computed (Cross-Entropy).
   1271 """
   1272 return_dict = return_dict if return_dict is not None else self.config.use_return_dict
-> 1274 outputs = self.swinv2(
   1275     pixel_values,
   1276     head_mask=head_mask,
   1277     output_attentions=output_attentions,
   1278     output_hidden_states=output_hidden_states,
   1279     return_dict=return_dict,
   1280 )
   1282 pooled_output = outputs[1]
   1284 logits = self.classifier(pooled_output)

File /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
   1496 # If we don't have any hooks, we want to skip the rest of the logic in
   1497 # this function, and just call forward.
   1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
   1499         or _global_backward_pre_hooks or _global_backward_hooks
   1500         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501     return forward_call(*args, **kwargs)
   1502 # Do not call functions when jit is used
   1503 full_backward_hooks, non_full_backward_hooks = [], []

File /opt/conda/lib/python3.10/site-packages/transformers/models/swinv2/modeling_swinv2.py:1076, in Swinv2Model.forward(self, pixel_values, bool_masked_pos, head_mask, output_attentions, output_hidden_states, return_dict)
   1069 # Prepare head mask if needed
   1070 # 1.0 in head_mask indicate we keep the head
   1071 # attention_probs has shape bsz x n_heads x N x N
   1072 # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads]
   1073 # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length]
   1074 head_mask = self.get_head_mask(head_mask, len(self.config.depths))
-> 1076 embedding_output, input_dimensions = self.embeddings(pixel_values, bool_masked_pos=bool_masked_pos)
   1078 encoder_outputs = self.encoder(
   1079     embedding_output,
   1080     input_dimensions,
   (...)
   1084     return_dict=return_dict,
   1085 )
   1087 sequence_output = encoder_outputs[0]

File /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
   1496 # If we don't have any hooks, we want to skip the rest of the logic in
   1497 # this function, and just call forward.
   1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
   1499         or _global_backward_pre_hooks or _global_backward_hooks
   1500         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501     return forward_call(*args, **kwargs)
   1502 # Do not call functions when jit is used
   1503 full_backward_hooks, non_full_backward_hooks = [], []

File /opt/conda/lib/python3.10/site-packages/transformers/models/swinv2/modeling_swinv2.py:295, in Swinv2Embeddings.forward(self, pixel_values, bool_masked_pos)
    292 def forward(
    293     self, pixel_values: Optional[torch.FloatTensor], bool_masked_pos: Optional[torch.BoolTensor] = None
    294 ) -> Tuple[torch.Tensor]:
--> 295     embeddings, output_dimensions = self.patch_embeddings(pixel_values)
    296     embeddings = self.norm(embeddings)
    297     batch_size, seq_len, _ = embeddings.size()

File /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
   1496 # If we don't have any hooks, we want to skip the rest of the logic in
   1497 # this function, and just call forward.
   1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
   1499         or _global_backward_pre_hooks or _global_backward_hooks
   1500         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501     return forward_call(*args, **kwargs)
   1502 # Do not call functions when jit is used
   1503 full_backward_hooks, non_full_backward_hooks = [], []

File /opt/conda/lib/python3.10/site-packages/transformers/models/swinv2/modeling_swinv2.py:353, in Swinv2PatchEmbeddings.forward(self, pixel_values)
    351 # pad the input to be divisible by self.patch_size, if needed
    352 pixel_values = self.maybe_pad(pixel_values, height, width)
--> 353 embeddings = self.projection(pixel_values)
    354 _, _, height, width = embeddings.shape
    355 output_dimensions = (height, width)

File /opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
   1496 # If we don't have any hooks, we want to skip the rest of the logic in
   1497 # this function, and just call forward.
   1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
   1499         or _global_backward_pre_hooks or _global_backward_hooks
   1500         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501     return forward_call(*args, **kwargs)
   1502 # Do not call functions when jit is used
   1503 full_backward_hooks, non_full_backward_hooks = [], []

File /opt/conda/lib/python3.10/site-packages/torch/nn/modules/conv.py:463, in Conv2d.forward(self, input)
    462 def forward(self, input: Tensor) -> Tensor:
--> 463     return self._conv_forward(input, self.weight, self.bias)

File /opt/conda/lib/python3.10/site-packages/torch/nn/modules/conv.py:459, in Conv2d._conv_forward(self, input, weight, bias)
    455 if self.padding_mode != 'zeros':
    456     return F.conv2d(F.pad(input, self._reversed_padding_repeated_twice, mode=self.padding_mode),
    457                     weight, bias, self.stride,
    458                     _pair(0), self.dilation, self.groups)
--> 459 return F.conv2d(input, weight, bias, self.stride,
    460                 self.padding, self.dilation, self.groups)

RuntimeError: GET was unable to find an engine to execute this computation

I'm not sure what happened; this error did not show up when I first ran my pipeline, but it appeared a few days ago. How can I fix it?

Swin Transformer from Hugging Face: 'microsoft/swinv2-tiny-patch4-window8-256'

from transformers import AutoModelForImageClassification, AutoImageProcessor

!pip install transformers==4.26.0
!pip3 install torch torchvision --extra-index-url https://download.pytorch.org/whl/cu117
!pip install tensorflow --upgrade
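
This error usually comes from the CUDA convolution path: cuDNN cannot find a kernel ("engine") for the requested op, which typically indicates a mismatch between the installed torch wheel, the CUDA version it was built against, and the driver in the environment. A quick sanity check using standard torch attributes, nothing project-specific:

import torch
print(torch.__version__)               # wheel version, e.g. 2.0.0+cu117
print(torch.version.cuda)              # CUDA toolkit the wheel was built against
print(torch.backends.cudnn.version())  # bundled cuDNN
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))

If the versions disagree with what the driver supports, reinstalling torch/torchvision from the matching index URL (as in the pip line above), or pinning back to the previously working versions, is the usual fix.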



Fit Next/Image component inside div whilst maintaining aspect ratio and border rounding

I've been trying to style a Next.js image with rounded corners while also having it grow to fit its containing div (blue in the image, difficult to see but there) and maintain its aspect ratio (unknown until runtime). The only thing broken with what I currently have is that the image does not get the border-radius, but the box surrounding it does (black in the image). I cannot find a way to get the border radius to work without hard-coding the image size, which must stay dynamic. The only other factor to consider is that this is all contained inside another fixed-positioned div (red in the image) that holds the whole popup.

Demo image

I have tried the below from suggestions in other threads and it almost works; the only issue I have found is that the image does not receive the rounded corners because its box is larger than the content, so the box is rounded rather than the image.

{/* Card that shows on click */}
<div className='fixed z-10 w-full h-full left-0 top-0 invisible bg-black/[.6]' id={'hidden-card-' + title} onClick={hideEnlargement}>
    <div className='w-[80%] h-[80%] translate-x-[calc(50vw-50%)] translate-y-[calc(50vh-60%)] rounded-2xl'>
        <Image
            src={img}
            alt={alt}
            quality={100}
            className="rounded-2xl bg-black m-auto"
            fill={true}
            style={{ objectFit: 'contain' }}
        />
    </div>
</div>
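
With fill, the image stretches to its positioned parent's box, so the rounding is most reliably applied to the parent and clipped with overflow-hidden. A hedged sketch of one commonly suggested arrangement, assuming the same Tailwind setup as above:

<div className='relative w-[80%] h-[80%] translate-x-[calc(50vw-50%)] translate-y-[calc(50vh-60%)] rounded-2xl overflow-hidden'>
    <Image
        src={img}
        alt={alt}
        quality={100}
        fill={true}
        style={{ objectFit: 'contain' }}
    />
</div>

overflow-hidden makes the parent's rounded box clip the image, and relative gives the fill image an explicit positioning context, as the Next.js docs require for fill.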


Use Greasemonkey to change URL links within a given domain

I'm new to Greasemonkey (Tampermonkey actually), and I'd like to write a very short script that:

  • Is valid within a given domain (e.g. "mydomain.com")

  • Parses all button-related URL links within the active tab

  • Replaces them as follows :

    Original URL link: [string_1]/[useful_part]?[string_2]

    To be replaced with: [replacement_1]/[useful_part]

    So everything after the "?" can be discarded, including the "?" itself.

More specifically, URL links are as follows:

http://127.0.0.1:6878/webui/player/[useful_part]?autoplay=true

So string 1 = "http://127.0.0.1:6878/webui/player" and string 2 = "autoplay=true"

I've seen a similar question here: Rewrite parts of links using Greasemonkey and FireFox

But I'm not good enough at RegEx, so I couldn't adapt the script to my own needs.

I've also looked for Firefox extensions, but the available extensions don't seem to allow the level of text replacement that I'm seeking.
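
For reference, a minimal Tampermonkey sketch of the rewrite (hedged: it assumes the button links are plain <a> elements; adjust @match and the selector to the real domain and markup, and note that replacement.example is a hypothetical stand-in for replacement_1):

// ==UserScript==
// @name     Rewrite player links
// @match    *://mydomain.com/*
// @grant    none
// ==/UserScript==
(function () {
    'use strict';
    var PREFIX = 'http://127.0.0.1:6878/webui/player/';   // string_1 + '/'
    var REPLACEMENT = 'https://replacement.example/';     // hypothetical replacement_1 + '/'
    document.querySelectorAll('a[href^="' + PREFIX + '"]').forEach(function (a) {
        var usefulPart = a.href.slice(PREFIX.length).split('?')[0]; // drop the '?' and everything after
        a.href = REPLACEMENT + usefulPart;
    });
})();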



Random numbers between -1 and 1 summing to 0

With R, how to generate n random numbers x_1, ..., x_n that lie between -1 and 1 and that sum to 0?

What about the generalization to another sum and another range?
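
One simple construction (a sketch, not uniform over the constrained set): draw from U(-1, 1), centre the draws so they sum to zero, then shrink back into range; dividing by a positive constant preserves the zero sum.

rand_sum_zero <- function(n) {
  x <- runif(n, -1, 1)
  x <- x - mean(x)               # sum is now 0, but values may leave [-1, 1]
  x / max(1, max(abs(x)))        # rescale into range without breaking the sum
}
x <- rand_sum_zero(10)
sum(x); range(x)

For a target sum s and range [a, b], the same idea applies after reducing to a zero-sum problem: generate y in [a - s/n, b - s/n] summing to 0 and set x = y + s/n.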



Getting rid of unwanted commits

My branch was behind master, so I used git pull origin master, and now I have 44 commits that aren't mine which I don't want to push along with my changes. How do I fix this?
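
If the pull created a merge commit, one common recovery (a sketch; make sure your own work is committed first):

git reset --merge ORIG_HEAD       # undo the merge that git pull created
git pull --rebase origin master   # replay your own commits on top of master

After the rebase, your branch history is just master plus your commits, so the 44 foreign commits no longer show up as part of your work; they are already upstream. If the pull fast-forwarded instead, those commits are simply master's history and there is nothing extra to push.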




2023-03-24

What is the name of the Master Stencil used by the Organization Chart Wizard in Visio?

I've looked everywhere to find out what the Master Stencil used by the Organization Chart Wizard is called, in order to reference it in some VBA, but I can't find it anywhere. Does anyone know what it's called?



foreach in powershell to retrieve processes

The output of the processes does not show in the results. I'm logging into each server and pulling its processes. If I do just one server without the foreach, it outputs the processes; however, when I add a foreach with a list of servers, it doesn't show the results.

$serverListFile = "servers.csv"
$global:ServerLists = Import-Csv -Path $global:ScriptPath\$ServerListFile -Delimiter "," | ForEach-Object  {
    $_.servers

    $currentServer = $_.servers
    Write-Host $currentServer

    Write-Host "Getting first 5 Processes on" $currentServer

    Invoke-Command -ComputerName $_.servers -Credential $Cred -ScriptBlock {
        Get-Process | Select-Object -First 5
    }
}

Here are the results:

Server1
Getting first 5 Processes on Server1
Server2
Getting first 5 Processes on Server2
Server3
Getting first 5 Processes on Server3
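
The likely culprit is the assignment: piping everything into $global:ServerLists captures the Invoke-Command output in the variable, while Write-Host bypasses the pipeline and still reaches the console, which is exactly the output pattern shown above. A minimal sketch without the capture:

$servers = Import-Csv -Path "$global:ScriptPath\servers.csv" -Delimiter ","
foreach ($server in $servers) {
    Write-Host "Getting first 5 processes on $($server.servers)"
    Invoke-Command -ComputerName $server.servers -Credential $Cred -ScriptBlock {
        Get-Process | Select-Object -First 5
    }
}

Alternatively, keep the assignment and print it afterwards, e.g. $global:ServerLists | Format-Table.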


error is always displayed on Azure AD B2C, why?

I am trying to display a custom error. The idea is that a user may have several MFA methods set up: the user logs in, then goes to a screen where they need to select the MFA method they want to use. If the method is in a list, they can proceed; if it is not, we need to display an error. This is my code:

...

<OrchestrationStep Order="3"
                                   Type="ClaimsExchange"
                                   ContentDefinitionReferenceId="api.selfasserted">
                    <ClaimsExchanges>
                        <ClaimsExchange Id="MFAConfigChecks"
                                        TechnicalProfileReferenceId="SelfAsserted-GettingSelectedMFAParameter" />
                    </ClaimsExchanges>
                </OrchestrationStep>

...

 <TechnicalProfile Id="SelfAsserted-GettingSelectedMFAParameter">
                <DisplayName>Getting selected MFA Parameter</DisplayName>
                <Protocol Name="Proprietary"
                          Handler="Web.TPEngine.Providers.SelfAssertedAttributeProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
                <Metadata>
                    <Item Key="ContentDefinitionReferenceId">api.selfasserted</Item>
                    <Item Key="UserMessageIfClaimsTransformationBooleanValueIsNotEqual">testeando.</Item>
                </Metadata>
                <IncludeInSso>false</IncludeInSso>
                <InputClaims>
                    <InputClaim ClaimTypeReferenceId="preferredAuthenticationMethodCollection" />
                    <InputClaim ClaimTypeReferenceId="NoMFAConfig" />
                </InputClaims>
                <OutputClaims>
                    <OutputClaim ClaimTypeReferenceId="selectedAuthenticationMethod"
                                 Required="true" />

                </OutputClaims>
                <OutputClaimsTransformations>

                    <OutputClaimsTransformation ReferenceId="SelectedMFAIsConfigured" />
                    <OutputClaimsTransformation ReferenceId="NeedToDisplayError" />

                </OutputClaimsTransformations>
                <ValidationTechnicalProfiles>
                    <ValidationTechnicalProfile ReferenceId="AssertBooleanSelectedMFAError" />
                </ValidationTechnicalProfiles>
            </TechnicalProfile>

...

<TechnicalProfile Id="AssertBooleanSelectedMFAError">
                    <DisplayName>Unit test</DisplayName>
                    <Protocol Name="Proprietary"
                              Handler="Web.TPEngine.Providers.ClaimsTransformationProtocolProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
                    <OutputClaims>
                        <OutputClaim ClaimTypeReferenceId="AreConditionsMet"
                                     DefaultValue="false" />
                    </OutputClaims>
                    <OutputClaimsTransformations>
                        <OutputClaimsTransformation ReferenceId="MFAConditionsMet" />
                    </OutputClaimsTransformations>
                    <UseTechnicalProfileForSessionManagement ReferenceId="SM-Noop" />
                </TechnicalProfile>

I have tested this so far with AreConditionsMet, and at this stage the error is still being displayed even when it is true. What am I doing wrong? I think it is taking the default value in every case, but I don't get why it does not work out the value in the SelfAsserted-GettingSelectedMFAParameter technical profile.

<ClaimsTransformation Id="MFAConditionsMet"
                              TransformationMethod="AssertBooleanClaimIsEqualToValue">
            <InputClaims>
                <InputClaim ClaimTypeReferenceId="AreConditionsMet"
                            TransformationClaimType="inputClaim" />
            </InputClaims>
            <InputParameters>
                <InputParameter Id="valueToCompareTo"
                                DataType="boolean"
                                Value="true" />
            </InputParameters>
        </ClaimsTransformation>


fetch() POST request returns "Error 415 Unsupported Media Type"

When trying to upload a PDF file using fetch(), I keep getting a 415 error. The PDF file is saved in the same directory as the JS file, and the name is definitely correct.

async function uploadFile(filePath, extension, timestamp) {

    const url = "https://api.hubapi.com/files/v3/files";
    // const url = "https://api.hubapi.com/filemanager/api/v3/files/upload"; // also doesn't work.
    var filename = `${filePath}.${extension}`;

    var fileOptions = {
        access: 'PRIVATE',
        overwrite: false,
        duplicateValidationStrategy: 'NONE',
        duplicateValidationScope: 'ENTIRE_PORTAL'
    };

    var formData = {
        file: fs.createReadStream(filename),
        fileName: `${filename} (${timestamp}).${extension}`,
        options: JSON.stringify(fileOptions),
        folderPath: 'Quotations'
    };

    try {
        const response = await fetch(url, { 
            "method": "POST",
            "formData": formData,
            "headers": {
                'Authorization': `Bearer ${process.env.ACCESS_TOKEN}`,
                "Content-Type": "application/pdf"
            }
        });

        if(!response.ok) {
            throw new Error(`Error. Response not ok: ${response.status}`);
        }
        
        const data = await response.json();
        return data;

    } catch(error) {
        console.log(`Error: ${error}`);
    }

}

const fileId = uploadFile("Quotation", "pdf", getCurrentTimestamp());
fileId.then((data) => console.log(data));

I have tried changing the "Content-Type" in headers to several different options, and all spit back 400 and 415 errors:

"Content-Type": "multipart/form-data" - "Error: Error: HTTP Error: 400".

"Content-Type": "application/json" - "Error: Error: HTTP Error: 415".

"Content-Type": "text/plain" - "Error: Error: HTTP Error: 415".

Excluding "Content-Type" returns - "Error: Error: HTTP Error: 400".

When I use request.post() instead of fetch(), I can upload the file. But I want to use fetch().
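
The reason request.post() works and fetch() doesn't is probably that fetch has no "formData" option: the unknown key is silently ignored, so the request goes out with no body, and the manually set Content-Type then mismatches what the files API expects. A hedged sketch using a real FormData body on Node 18+ (where fetch, FormData and Blob are global); fetch must be allowed to set the multipart Content-Type itself so the boundary is included:

const fs = require("fs");

async function uploadFile(filePath, extension, timestamp) {
    const url = "https://api.hubapi.com/files/v3/files";
    const filename = `${filePath}.${extension}`;

    const formData = new FormData();
    formData.append(
        "file",
        new Blob([fs.readFileSync(filename)], { type: "application/pdf" }),
        `${filePath} (${timestamp}).${extension}`
    );
    formData.append("options", JSON.stringify({
        access: "PRIVATE",
        overwrite: false,
        duplicateValidationStrategy: "NONE",
        duplicateValidationScope: "ENTIRE_PORTAL"
    }));
    formData.append("folderPath", "Quotations");

    const response = await fetch(url, {
        method: "POST",
        body: formData,   // do NOT set Content-Type manually
        headers: { Authorization: `Bearer ${process.env.ACCESS_TOKEN}` }
    });
    if (!response.ok) throw new Error(`Error. Response not ok: ${response.status}`);
    return response.json();
}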



Replace empty values with values of other records in a dataframe

I have the following problem

I need to replace the empty cell values in the dataframe with the Costs values of other records with the same CustomerNr.

df1 = pd.DataFrame([[1004, ''], [1004, 'D'], [1005, 'C'],
                    [1010, 'A'], [1010, ''], [1010, ''], [1010, ''], [1004, '']],
                   columns=['CustomerNr', 'Costs'])

CustomerNr   Costs
1004
1004         D
1005         C
1010         A
1010
1010
1010
1004

Desired output:

CustomerNr   Costs
1004         D
1004         D
1005         C
1010         A
1010         A
1010         A
1010         A
1004         D
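
A minimal pandas sketch: treat the empty strings as missing, then fill within each CustomerNr group from whichever record does have a value.

import pandas as pd

df1 = pd.DataFrame([[1004, ''], [1004, 'D'], [1005, 'C'],
                    [1010, 'A'], [1010, ''], [1010, ''], [1010, ''], [1004, '']],
                   columns=['CustomerNr', 'Costs'])

df1['Costs'] = (df1['Costs'].replace('', pd.NA)
                .groupby(df1['CustomerNr'])
                .transform(lambda s: s.ffill().bfill()))
print(df1)

This assumes each CustomerNr has at most one distinct non-empty cost; if several different costs can occur per customer, the fill rule needs to be made explicit first.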


2023-03-23

Move custom attribute below product title on shop page in WooCommerce

Afternoon Stack Overflow,

I've been trying to move the custom div I've created, which displays a custom attribute in the shop grid for products. I want the small text that says '2% Nic Salt, 600 Puffs, Inhale Activated' below the product title.

I'm using this code to display the attribute.

// Fail Safe
add_action('woocommerce_after_shop_loop_item_title', 'display_shop_loop_product_attributes');
function display_shop_loop_product_attributes() {
    global $product;

    // List the correct Attributes via pa_
    $product_attribute_taxonomies = array( 'pa_grid-attributes', 'pa_grid', 'pa_styling', 'pa_number' );
    $attr_output = array(); // Initializing

    // Loop through
    foreach( $product_attribute_taxonomies as $taxonomy ){
        if( taxonomy_exists($taxonomy) ){
            $label_name = wc_attribute_label( $taxonomy, $product );

            $term_names = $product->get_attribute( $taxonomy );

            if( ! empty($term_names) ){
                $attr_output[] = '<span class="'.$taxonomy.'">'.$term_names.'</span>';
            }
        }
    }

    // Display 
    echo '<div class="product-attributes-grid">'.implode( '<br>', $attr_output ).'</div>';
}

Using WooCommerce version 7.4.1.

I tried moving the string by editing functions.php with the visual hook guide, with no success.
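
One hedged approach that avoids template overrides: on woocommerce_after_shop_loop_item_title, WooCommerce core hooks the star rating at priority 5 and the price at priority 10, so re-registering the callback with a lower priority should land the attributes directly under the title.

// Re-register with an early priority so the div prints before rating/price
remove_action( 'woocommerce_after_shop_loop_item_title', 'display_shop_loop_product_attributes' );
add_action( 'woocommerce_after_shop_loop_item_title', 'display_shop_loop_product_attributes', 2 );

If the theme re-hooks these template functions at different priorities, the numbers need checking against the theme's own includes.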



Create Performance Score, by Month using Switch with Multiple Conditions

Novice PBI/DAX user here. I'm trying to add a SCORING assessment for this KPI using IF statements. The KPI is calculated based on MONTH date bins. It doesn't seem to be working. Any ideas are appreciated.

Here is a sample of what I am receiving. I've circled SOME of the values that should be either Successful or Outstanding.

Snippet PBI Report


Here is the measure that calculates Avg Calls / AE (Person) / Month which is referenced in the IF statement.

Avg Calls / AE / Month Calculation


The expected results should show some SUCCESSFUL and OUTSTANDING results.

UPDATE:

I created a NEW TABLE (New Opportunities Created - SUMMARIZED TABLE) from the original table which contains row level detail (New Opportunities Created).

Then I created a new column to calculate SCORE:

Score = SWITCH(TRUE(),
'New Opportunities Created - SUMMARIZED TABLE'[Segment] IN {"Field Sales", "LTL"} && 'New Opportunities Created - SUMMARIZED TABLE'[#Opps]<3.5, "NI",
'New Opportunities Created - SUMMARIZED TABLE'[Segment] IN {"Field Sales", "LTL"} && 'New Opportunities Created - SUMMARIZED TABLE'[#Opps]>4.3, "Exceeds", "Meets"
)

Below is the result. It works at the MONTHLY level; however, it is not working for the COLUMN and ROW TOTALS.

Revised Table
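
If the totals are wrong because Score lives in a calculated column, the usual remedy is to recompute the score as a measure, so the SWITCH re-evaluates in the total's filter context instead of aggregating row-level labels. A hedged sketch; the aggregation of [#Opps] is assumed (swap SUM for AVERAGE if that is how the KPI should roll up), and the Segment filter from the column version would still need adding:

Score Measure =
VAR Opps = SUM ( 'New Opportunities Created - SUMMARIZED TABLE'[#Opps] )
RETURN
    SWITCH (
        TRUE (),
        Opps < 3.5, "NI",
        Opps > 4.3, "Exceeds",
        "Meets"
    )

At the total level the VAR holds the aggregated opportunities, so the label reflects the total rather than whichever row label happens to surface first.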



.Net test multiple connections for AD authentication using system.web.security (membership)

Most of our .NET IIS web apps use System.Web.Security in .NET to validate user logins against AD. However, during a company transition period, we currently have 4 AD domain controllers. I am looking for a way to test the AD user login against the other AD connection strings in the event that one is down or that a user is using a different domain's password.

Our code is just:

bool isValidAdUser = Membership.ValidateUser(model.UserName, model.Password);

And the connection string is set inside the Membership tag in web.config. How can I add more connection strings?
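
If one provider per domain controller is acceptable, web.config can declare several entries under <membership><providers>, each with its own connectionStringName, and the code can fall through them. A hedged sketch:

using System.Web.Security;

bool isValidAdUser = false;
foreach (MembershipProvider provider in Membership.Providers)
{
    if (provider.ValidateUser(model.UserName, model.Password))
    {
        isValidAdUser = true;
        break;   // first provider (domain controller) that accepts the login wins
    }
}

Membership.ValidateUser only consults the defaultProvider; iterating Membership.Providers is what brings the other connection strings into play.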



Metrics to compare two sets of 1D points

I'm training an AI model to predict where to place train stations along a train track.

I want to feed my model some information about the train track, which generates a series of points A, B, C, ... that correspond to stations that I should place on the train track.

A prediction P could look like:

=====A========================B===============================C==========

I also have a ground truth T for my training examples. For instance, that could look like:

=========A====================B=========C=============================D==

Now, what I want is a metric to measure my model.

I've been thinking about possible solutions, but none of the usual candidates seem to fit this problem. Some ideas I've considered are:

Creating a custom F1 score

To do so, get the distance of each point in P with respect to the closest point in T (precision). Likewise, get the distance of each point in T with respect to the closest one in P (recall). Duplicates are allowed (for instance, T(B) and T(C) would both be matched against P(B)).

This suffers from a small problem, however. Consider an alternative P:

=====A====================B=C=D=E=F===========================G==========

In this case, precision and recall would remain more or less the same. However, I would like the score to reflect the fact that many more points than expected have been placed along the track.

Metrics for Time Series

During my research of metrics I've searched on the time series domain just in case there was something similar to this problem but I've found nothing that closely resembles this scenario.

Things that the score should reflect

  1. T has more points than P.
  2. P has more points than T.
  3. The distance between predicted points and ground truth ones, and vice versa.

I don't want to reinvent the wheel, and this seems like a (possibly) common scenario.

Is there anything I might be missing?
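
One candidate that covers all three requirements is the 1D earth mover's (Wasserstein) distance combined with an explicit cardinality penalty: Wasserstein captures where the stations sit, and the penalty reacts when P has too many or too few points, which pure nearest-neighbour (Chamfer/F1-style) matching misses. A hedged sketch; track_len and count_weight are free parameters to tune, not part of any standard:

import numpy as np
from scipy.stats import wasserstein_distance

def station_score(pred, truth, track_len=1.0, count_weight=0.5):
    # Positions normalised to [0, 1] so the two terms share a scale
    emd = wasserstein_distance(np.sort(pred) / track_len,
                               np.sort(truth) / track_len)
    count_penalty = abs(len(pred) - len(truth)) / max(len(pred), len(truth))
    return emd + count_weight * count_penalty   # lower is better

print(station_score([0.05, 0.30, 0.62], [0.09, 0.30, 0.41, 0.71]))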